Test Report: KVM_Linux_crio 19283

                    
8d2418a61c606cc3028c5bf9242bf095ec458362:2024-07-17:35383

Failed tests (30/320)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 155.12
41 TestAddons/parallel/MetricsServer 335.27
54 TestAddons/StoppedEnableDisable 154.37
173 TestMultiControlPlane/serial/StopSecondaryNode 141.8
175 TestMultiControlPlane/serial/RestartSecondaryNode 57.75
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 372.45
180 TestMultiControlPlane/serial/StopCluster 141.51
240 TestMultiNode/serial/RestartKeepsNodes 325.4
242 TestMultiNode/serial/StopMultiNode 141.17
249 TestPreload 276.77
257 TestKubernetesUpgrade 342.82
293 TestPause/serial/SecondStartNoReconfiguration 438.82
334 TestStartStop/group/old-k8s-version/serial/FirstStart 267.48
346 TestStartStop/group/embed-certs/serial/Stop 139.05
351 TestStartStop/group/no-preload/serial/Stop 138.94
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.08
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
356 TestStartStop/group/old-k8s-version/serial/DeployApp 0.46
357 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 94.97
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
363 TestStartStop/group/old-k8s-version/serial/SecondStart 709.15
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.19
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.15
368 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.24
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.3
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 378.83
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 474.47
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 360.3
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 185.55
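Each failure below can be re-run locally by test name; a minimal sketch, assuming a checkout of the minikube source tree where these integration tests live under test/integration (the timeout is a placeholder, and any driver/runtime harness flags the suite expects are omitted here):

# Hedged sketch: re-run a single failed integration test by name.
# Package path and timeout are assumptions; adjust to the local checkout.
go test -v ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 90m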
TestAddons/parallel/Ingress (155.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-435911 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-435911 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-435911 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c68e6dcb-da12-4d99-a5b7-eb687873f149] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c68e6dcb-da12-4d99-a5b7-eb687873f149] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.002965956s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-435911 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.047424092s)

** stderr **
	ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
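The failing check can be repeated by hand against this profile for triage; a minimal sketch built from the commands logged above, with a kubectl pod listing added here for inspection (the ssh exit status 28 most likely surfaces curl's exit code 28, an operation timeout):

# Confirm the ingress-nginx controller pod is Running and Ready in the profile.
kubectl --context addons-435911 get pods -n ingress-nginx -l app.kubernetes.io/component=controller
# Retry the in-VM request that timed out; -v added for verbose curl output.
out/minikube-linux-amd64 -p addons-435911 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"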
addons_test.go:288: (dbg) Run:  kubectl --context addons-435911 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.27
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-435911 addons disable ingress-dns --alsologtostderr -v=1: (1.478322482s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-435911 addons disable ingress --alsologtostderr -v=1: (7.643358772s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-435911 -n addons-435911
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-435911 logs -n 25: (1.296414166s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| delete  | -p download-only-865281                                                                     | download-only-865281 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| delete  | -p download-only-285503                                                                     | download-only-285503 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| delete  | -p download-only-840522                                                                     | download-only-840522 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| delete  | -p download-only-865281                                                                     | download-only-865281 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-325566 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC |                     |
	|         | binary-mirror-325566                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45523                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-325566                                                                     | binary-mirror-325566 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| addons  | enable dashboard -p                                                                         | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC |                     |
	|         | addons-435911                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC |                     |
	|         | addons-435911                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-435911 --wait=true                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:15 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:15 UTC | 17 Jul 24 17:15 UTC |
	|         | -p addons-435911                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:15 UTC | 17 Jul 24 17:15 UTC |
	|         | -p addons-435911                                                                            |                      |         |         |                     |                     |
	| addons  | addons-435911 addons disable                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | addons-435911                                                                               |                      |         |         |                     |                     |
	| ip      | addons-435911 ip                                                                            | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	| addons  | addons-435911 addons disable                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-435911 ssh cat                                                                       | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | /opt/local-path-provisioner/pvc-f3597c1f-ead9-4165-91c7-88a61a002e8f_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-435911 addons disable                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-435911 ssh curl -s                                                                   | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | addons-435911                                                                               |                      |         |         |                     |                     |
	| addons  | addons-435911 addons                                                                        | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-435911 addons                                                                        | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-435911 ip                                                                            | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:18 UTC | 17 Jul 24 17:18 UTC |
	| addons  | addons-435911 addons disable                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:18 UTC | 17 Jul 24 17:18 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-435911 addons disable                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:18 UTC | 17 Jul 24 17:18 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 17:12:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 17:12:20.366990   22585 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:12:20.367184   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:12:20.367193   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:12:20.367196   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:12:20.367357   22585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:12:20.367882   22585 out.go:298] Setting JSON to false
	I0717 17:12:20.368636   22585 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3283,"bootTime":1721233057,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 17:12:20.368687   22585 start.go:139] virtualization: kvm guest
	I0717 17:12:20.370849   22585 out.go:177] * [addons-435911] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 17:12:20.372158   22585 notify.go:220] Checking for updates...
	I0717 17:12:20.372165   22585 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 17:12:20.373709   22585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 17:12:20.375248   22585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:12:20.376522   22585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:12:20.377858   22585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 17:12:20.379161   22585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 17:12:20.380429   22585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 17:12:20.411530   22585 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 17:12:20.412986   22585 start.go:297] selected driver: kvm2
	I0717 17:12:20.413011   22585 start.go:901] validating driver "kvm2" against <nil>
	I0717 17:12:20.413024   22585 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 17:12:20.413702   22585 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:12:20.413788   22585 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 17:12:20.427867   22585 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 17:12:20.427918   22585 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 17:12:20.428167   22585 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:12:20.428194   22585 cni.go:84] Creating CNI manager for ""
	I0717 17:12:20.428201   22585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 17:12:20.428208   22585 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 17:12:20.428272   22585 start.go:340] cluster config:
	{Name:addons-435911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-435911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:12:20.428417   22585 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:12:20.430193   22585 out.go:177] * Starting "addons-435911" primary control-plane node in "addons-435911" cluster
	I0717 17:12:20.431619   22585 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:12:20.431646   22585 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 17:12:20.431662   22585 cache.go:56] Caching tarball of preloaded images
	I0717 17:12:20.431745   22585 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 17:12:20.431758   22585 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 17:12:20.432088   22585 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/config.json ...
	I0717 17:12:20.432124   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/config.json: {Name:mkdb577ecb5b4431a5b621d57f357237d5e29122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:20.432264   22585 start.go:360] acquireMachinesLock for addons-435911: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 17:12:20.432315   22585 start.go:364] duration metric: took 35.633µs to acquireMachinesLock for "addons-435911"
	I0717 17:12:20.432337   22585 start.go:93] Provisioning new machine with config: &{Name:addons-435911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:addons-435911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:12:20.432400   22585 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 17:12:20.434179   22585 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 17:12:20.434293   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:12:20.434332   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:12:20.448111   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I0717 17:12:20.448539   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:12:20.449067   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:12:20.449089   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:12:20.449465   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:12:20.449643   22585 main.go:141] libmachine: (addons-435911) Calling .GetMachineName
	I0717 17:12:20.449782   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:20.449904   22585 start.go:159] libmachine.API.Create for "addons-435911" (driver="kvm2")
	I0717 17:12:20.449932   22585 client.go:168] LocalClient.Create starting
	I0717 17:12:20.449965   22585 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 17:12:20.701602   22585 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 17:12:20.890648   22585 main.go:141] libmachine: Running pre-create checks...
	I0717 17:12:20.890668   22585 main.go:141] libmachine: (addons-435911) Calling .PreCreateCheck
	I0717 17:12:20.891180   22585 main.go:141] libmachine: (addons-435911) Calling .GetConfigRaw
	I0717 17:12:20.891595   22585 main.go:141] libmachine: Creating machine...
	I0717 17:12:20.891615   22585 main.go:141] libmachine: (addons-435911) Calling .Create
	I0717 17:12:20.891772   22585 main.go:141] libmachine: (addons-435911) Creating KVM machine...
	I0717 17:12:20.893174   22585 main.go:141] libmachine: (addons-435911) DBG | found existing default KVM network
	I0717 17:12:20.893930   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:20.893777   22607 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0717 17:12:20.893957   22585 main.go:141] libmachine: (addons-435911) DBG | created network xml: 
	I0717 17:12:20.893972   22585 main.go:141] libmachine: (addons-435911) DBG | <network>
	I0717 17:12:20.893980   22585 main.go:141] libmachine: (addons-435911) DBG |   <name>mk-addons-435911</name>
	I0717 17:12:20.894039   22585 main.go:141] libmachine: (addons-435911) DBG |   <dns enable='no'/>
	I0717 17:12:20.894068   22585 main.go:141] libmachine: (addons-435911) DBG |   
	I0717 17:12:20.894079   22585 main.go:141] libmachine: (addons-435911) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 17:12:20.894087   22585 main.go:141] libmachine: (addons-435911) DBG |     <dhcp>
	I0717 17:12:20.894094   22585 main.go:141] libmachine: (addons-435911) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 17:12:20.894099   22585 main.go:141] libmachine: (addons-435911) DBG |     </dhcp>
	I0717 17:12:20.894104   22585 main.go:141] libmachine: (addons-435911) DBG |   </ip>
	I0717 17:12:20.894108   22585 main.go:141] libmachine: (addons-435911) DBG |   
	I0717 17:12:20.894114   22585 main.go:141] libmachine: (addons-435911) DBG | </network>
	I0717 17:12:20.894121   22585 main.go:141] libmachine: (addons-435911) DBG | 
	I0717 17:12:20.899544   22585 main.go:141] libmachine: (addons-435911) DBG | trying to create private KVM network mk-addons-435911 192.168.39.0/24...
	I0717 17:12:20.960024   22585 main.go:141] libmachine: (addons-435911) DBG | private KVM network mk-addons-435911 192.168.39.0/24 created
	I0717 17:12:20.960053   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:20.959980   22607 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:12:20.960090   22585 main.go:141] libmachine: (addons-435911) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911 ...
	I0717 17:12:20.960125   22585 main.go:141] libmachine: (addons-435911) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 17:12:20.960153   22585 main.go:141] libmachine: (addons-435911) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 17:12:21.190737   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:21.190626   22607 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa...
	I0717 17:12:21.271060   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:21.270962   22607 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/addons-435911.rawdisk...
	I0717 17:12:21.271088   22585 main.go:141] libmachine: (addons-435911) DBG | Writing magic tar header
	I0717 17:12:21.271104   22585 main.go:141] libmachine: (addons-435911) DBG | Writing SSH key tar header
	I0717 17:12:21.271653   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:21.271575   22607 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911 ...
	I0717 17:12:21.271690   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911
	I0717 17:12:21.271705   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 17:12:21.271719   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911 (perms=drwx------)
	I0717 17:12:21.271730   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 17:12:21.271736   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 17:12:21.271743   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 17:12:21.271748   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 17:12:21.271759   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 17:12:21.271767   22585 main.go:141] libmachine: (addons-435911) Creating domain...
	I0717 17:12:21.271777   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:12:21.271792   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 17:12:21.271798   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 17:12:21.271804   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins
	I0717 17:12:21.271812   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home
	I0717 17:12:21.271821   22585 main.go:141] libmachine: (addons-435911) DBG | Skipping /home - not owner
	I0717 17:12:21.272935   22585 main.go:141] libmachine: (addons-435911) define libvirt domain using xml: 
	I0717 17:12:21.272975   22585 main.go:141] libmachine: (addons-435911) <domain type='kvm'>
	I0717 17:12:21.272984   22585 main.go:141] libmachine: (addons-435911)   <name>addons-435911</name>
	I0717 17:12:21.272989   22585 main.go:141] libmachine: (addons-435911)   <memory unit='MiB'>4000</memory>
	I0717 17:12:21.272995   22585 main.go:141] libmachine: (addons-435911)   <vcpu>2</vcpu>
	I0717 17:12:21.273001   22585 main.go:141] libmachine: (addons-435911)   <features>
	I0717 17:12:21.273032   22585 main.go:141] libmachine: (addons-435911)     <acpi/>
	I0717 17:12:21.273054   22585 main.go:141] libmachine: (addons-435911)     <apic/>
	I0717 17:12:21.273074   22585 main.go:141] libmachine: (addons-435911)     <pae/>
	I0717 17:12:21.273088   22585 main.go:141] libmachine: (addons-435911)     
	I0717 17:12:21.273100   22585 main.go:141] libmachine: (addons-435911)   </features>
	I0717 17:12:21.273113   22585 main.go:141] libmachine: (addons-435911)   <cpu mode='host-passthrough'>
	I0717 17:12:21.273121   22585 main.go:141] libmachine: (addons-435911)   
	I0717 17:12:21.273134   22585 main.go:141] libmachine: (addons-435911)   </cpu>
	I0717 17:12:21.273145   22585 main.go:141] libmachine: (addons-435911)   <os>
	I0717 17:12:21.273154   22585 main.go:141] libmachine: (addons-435911)     <type>hvm</type>
	I0717 17:12:21.273163   22585 main.go:141] libmachine: (addons-435911)     <boot dev='cdrom'/>
	I0717 17:12:21.273167   22585 main.go:141] libmachine: (addons-435911)     <boot dev='hd'/>
	I0717 17:12:21.273173   22585 main.go:141] libmachine: (addons-435911)     <bootmenu enable='no'/>
	I0717 17:12:21.273179   22585 main.go:141] libmachine: (addons-435911)   </os>
	I0717 17:12:21.273189   22585 main.go:141] libmachine: (addons-435911)   <devices>
	I0717 17:12:21.273201   22585 main.go:141] libmachine: (addons-435911)     <disk type='file' device='cdrom'>
	I0717 17:12:21.273210   22585 main.go:141] libmachine: (addons-435911)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/boot2docker.iso'/>
	I0717 17:12:21.273217   22585 main.go:141] libmachine: (addons-435911)       <target dev='hdc' bus='scsi'/>
	I0717 17:12:21.273223   22585 main.go:141] libmachine: (addons-435911)       <readonly/>
	I0717 17:12:21.273229   22585 main.go:141] libmachine: (addons-435911)     </disk>
	I0717 17:12:21.273235   22585 main.go:141] libmachine: (addons-435911)     <disk type='file' device='disk'>
	I0717 17:12:21.273243   22585 main.go:141] libmachine: (addons-435911)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 17:12:21.273251   22585 main.go:141] libmachine: (addons-435911)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/addons-435911.rawdisk'/>
	I0717 17:12:21.273262   22585 main.go:141] libmachine: (addons-435911)       <target dev='hda' bus='virtio'/>
	I0717 17:12:21.273267   22585 main.go:141] libmachine: (addons-435911)     </disk>
	I0717 17:12:21.273276   22585 main.go:141] libmachine: (addons-435911)     <interface type='network'>
	I0717 17:12:21.273282   22585 main.go:141] libmachine: (addons-435911)       <source network='mk-addons-435911'/>
	I0717 17:12:21.273286   22585 main.go:141] libmachine: (addons-435911)       <model type='virtio'/>
	I0717 17:12:21.273292   22585 main.go:141] libmachine: (addons-435911)     </interface>
	I0717 17:12:21.273299   22585 main.go:141] libmachine: (addons-435911)     <interface type='network'>
	I0717 17:12:21.273305   22585 main.go:141] libmachine: (addons-435911)       <source network='default'/>
	I0717 17:12:21.273310   22585 main.go:141] libmachine: (addons-435911)       <model type='virtio'/>
	I0717 17:12:21.273318   22585 main.go:141] libmachine: (addons-435911)     </interface>
	I0717 17:12:21.273322   22585 main.go:141] libmachine: (addons-435911)     <serial type='pty'>
	I0717 17:12:21.273329   22585 main.go:141] libmachine: (addons-435911)       <target port='0'/>
	I0717 17:12:21.273333   22585 main.go:141] libmachine: (addons-435911)     </serial>
	I0717 17:12:21.273345   22585 main.go:141] libmachine: (addons-435911)     <console type='pty'>
	I0717 17:12:21.273354   22585 main.go:141] libmachine: (addons-435911)       <target type='serial' port='0'/>
	I0717 17:12:21.273360   22585 main.go:141] libmachine: (addons-435911)     </console>
	I0717 17:12:21.273372   22585 main.go:141] libmachine: (addons-435911)     <rng model='virtio'>
	I0717 17:12:21.273381   22585 main.go:141] libmachine: (addons-435911)       <backend model='random'>/dev/random</backend>
	I0717 17:12:21.273388   22585 main.go:141] libmachine: (addons-435911)     </rng>
	I0717 17:12:21.273393   22585 main.go:141] libmachine: (addons-435911)     
	I0717 17:12:21.273400   22585 main.go:141] libmachine: (addons-435911)     
	I0717 17:12:21.273405   22585 main.go:141] libmachine: (addons-435911)   </devices>
	I0717 17:12:21.273409   22585 main.go:141] libmachine: (addons-435911) </domain>
	I0717 17:12:21.273416   22585 main.go:141] libmachine: (addons-435911) 
	I0717 17:12:21.279156   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:24:c5:64 in network default
	I0717 17:12:21.279689   22585 main.go:141] libmachine: (addons-435911) Ensuring networks are active...
	I0717 17:12:21.279706   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:21.280307   22585 main.go:141] libmachine: (addons-435911) Ensuring network default is active
	I0717 17:12:21.280635   22585 main.go:141] libmachine: (addons-435911) Ensuring network mk-addons-435911 is active
	I0717 17:12:21.281111   22585 main.go:141] libmachine: (addons-435911) Getting domain xml...
	I0717 17:12:21.281739   22585 main.go:141] libmachine: (addons-435911) Creating domain...
	I0717 17:12:22.663364   22585 main.go:141] libmachine: (addons-435911) Waiting to get IP...
	I0717 17:12:22.664232   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:22.664615   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:22.664646   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:22.664600   22607 retry.go:31] will retry after 247.523027ms: waiting for machine to come up
	I0717 17:12:22.914133   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:22.914537   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:22.914561   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:22.914504   22607 retry.go:31] will retry after 330.822603ms: waiting for machine to come up
	I0717 17:12:23.246937   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:23.247313   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:23.247342   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:23.247269   22607 retry.go:31] will retry after 384.776946ms: waiting for machine to come up
	I0717 17:12:23.633885   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:23.634274   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:23.634298   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:23.634225   22607 retry.go:31] will retry after 371.079585ms: waiting for machine to come up
	I0717 17:12:24.006814   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:24.007316   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:24.007359   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:24.007284   22607 retry.go:31] will retry after 675.440496ms: waiting for machine to come up
	I0717 17:12:24.684266   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:24.684682   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:24.684702   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:24.684662   22607 retry.go:31] will retry after 718.016746ms: waiting for machine to come up
	I0717 17:12:25.404589   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:25.405027   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:25.405045   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:25.405013   22607 retry.go:31] will retry after 828.529004ms: waiting for machine to come up
	I0717 17:12:26.235561   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:26.235986   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:26.236010   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:26.235972   22607 retry.go:31] will retry after 1.204384515s: waiting for machine to come up
	I0717 17:12:27.442372   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:27.442919   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:27.442949   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:27.442884   22607 retry.go:31] will retry after 1.146713076s: waiting for machine to come up
	I0717 17:12:28.591279   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:28.591820   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:28.591849   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:28.591723   22607 retry.go:31] will retry after 1.784907319s: waiting for machine to come up
	I0717 17:12:30.378557   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:30.378986   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:30.379014   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:30.378933   22607 retry.go:31] will retry after 2.189248903s: waiting for machine to come up
	I0717 17:12:32.569289   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:32.569746   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:32.569768   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:32.569709   22607 retry.go:31] will retry after 2.991910233s: waiting for machine to come up
	I0717 17:12:35.563308   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:35.563703   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:35.563729   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:35.563675   22607 retry.go:31] will retry after 3.89189793s: waiting for machine to come up
	I0717 17:12:39.459734   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:39.460097   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:39.460117   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:39.460059   22607 retry.go:31] will retry after 5.371779373s: waiting for machine to come up
	I0717 17:12:44.836315   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:44.836725   22585 main.go:141] libmachine: (addons-435911) Found IP for machine: 192.168.39.27
	I0717 17:12:44.836749   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has current primary IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:44.836759   22585 main.go:141] libmachine: (addons-435911) Reserving static IP address...
	I0717 17:12:44.837027   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find host DHCP lease matching {name: "addons-435911", mac: "52:54:00:9b:64:f4", ip: "192.168.39.27"} in network mk-addons-435911
	I0717 17:12:44.903693   22585 main.go:141] libmachine: (addons-435911) DBG | Getting to WaitForSSH function...
	I0717 17:12:44.903720   22585 main.go:141] libmachine: (addons-435911) Reserved static IP address: 192.168.39.27
	I0717 17:12:44.903760   22585 main.go:141] libmachine: (addons-435911) Waiting for SSH to be available...
	I0717 17:12:44.905971   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:44.906372   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911
	I0717 17:12:44.906398   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find defined IP address of network mk-addons-435911 interface with MAC address 52:54:00:9b:64:f4
	I0717 17:12:44.906547   22585 main.go:141] libmachine: (addons-435911) DBG | Using SSH client type: external
	I0717 17:12:44.906572   22585 main.go:141] libmachine: (addons-435911) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa (-rw-------)
	I0717 17:12:44.906616   22585 main.go:141] libmachine: (addons-435911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 17:12:44.906645   22585 main.go:141] libmachine: (addons-435911) DBG | About to run SSH command:
	I0717 17:12:44.906680   22585 main.go:141] libmachine: (addons-435911) DBG | exit 0
	I0717 17:12:44.917214   22585 main.go:141] libmachine: (addons-435911) DBG | SSH cmd err, output: exit status 255: 
	I0717 17:12:44.917239   22585 main.go:141] libmachine: (addons-435911) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 17:12:44.917249   22585 main.go:141] libmachine: (addons-435911) DBG | command : exit 0
	I0717 17:12:44.917261   22585 main.go:141] libmachine: (addons-435911) DBG | err     : exit status 255
	I0717 17:12:44.917272   22585 main.go:141] libmachine: (addons-435911) DBG | output  : 
	I0717 17:12:47.918853   22585 main.go:141] libmachine: (addons-435911) DBG | Getting to WaitForSSH function...
	I0717 17:12:47.921231   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:47.921588   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:47.921616   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:47.921713   22585 main.go:141] libmachine: (addons-435911) DBG | Using SSH client type: external
	I0717 17:12:47.921754   22585 main.go:141] libmachine: (addons-435911) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa (-rw-------)
	I0717 17:12:47.921776   22585 main.go:141] libmachine: (addons-435911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 17:12:47.921785   22585 main.go:141] libmachine: (addons-435911) DBG | About to run SSH command:
	I0717 17:12:47.921794   22585 main.go:141] libmachine: (addons-435911) DBG | exit 0
	I0717 17:12:48.048760   22585 main.go:141] libmachine: (addons-435911) DBG | SSH cmd err, output: <nil>: 
	I0717 17:12:48.049073   22585 main.go:141] libmachine: (addons-435911) KVM machine creation complete!
	I0717 17:12:48.049426   22585 main.go:141] libmachine: (addons-435911) Calling .GetConfigRaw
	I0717 17:12:48.049923   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:48.050199   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:48.050337   22585 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 17:12:48.050351   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:12:48.051580   22585 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 17:12:48.051602   22585 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 17:12:48.051617   22585 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 17:12:48.051625   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.054895   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.055306   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.055332   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.055448   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.055637   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.055813   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.055948   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.056100   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:48.056323   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:48.056336   22585 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 17:12:48.164094   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
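(The WaitForSSH probe above simply runs "exit 0" over SSH; a rough manual equivalent, sketched from the external-client options recorded at 17:12:47 in this log and abridged, would be:)
	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa \
	    -p 22 docker@192.168.39.27 'exit 0'
	echo "probe exit code: $?"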
	I0717 17:12:48.164118   22585 main.go:141] libmachine: Detecting the provisioner...
	I0717 17:12:48.164126   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.167033   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.167405   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.167435   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.167616   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.167834   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.168053   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.168214   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.168391   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:48.168586   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:48.168598   22585 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 17:12:48.280967   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 17:12:48.281039   22585 main.go:141] libmachine: found compatible host: buildroot
	I0717 17:12:48.281046   22585 main.go:141] libmachine: Provisioning with buildroot...
	I0717 17:12:48.281053   22585 main.go:141] libmachine: (addons-435911) Calling .GetMachineName
	I0717 17:12:48.281275   22585 buildroot.go:166] provisioning hostname "addons-435911"
	I0717 17:12:48.281299   22585 main.go:141] libmachine: (addons-435911) Calling .GetMachineName
	I0717 17:12:48.281493   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.283850   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.284164   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.284188   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.284304   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.284476   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.284608   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.284718   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.284886   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:48.285074   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:48.285087   22585 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-435911 && echo "addons-435911" | sudo tee /etc/hostname
	I0717 17:12:48.410069   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-435911
	
	I0717 17:12:48.410094   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.412902   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.413231   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.413258   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.413425   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.413613   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.413764   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.413903   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.414052   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:48.414220   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:48.414236   22585 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-435911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-435911/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-435911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 17:12:48.532314   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
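(The shell block above is idempotent: an existing 127.0.1.1 entry in /etc/hosts is rewritten to the new hostname, otherwise one is appended. A quick check on the guest, not part of this run, would be:)
	hostname
	grep '^127.0.1.1' /etc/hosts    # expected: 127.0.1.1 addons-435911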
	I0717 17:12:48.532344   22585 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 17:12:48.532370   22585 buildroot.go:174] setting up certificates
	I0717 17:12:48.532379   22585 provision.go:84] configureAuth start
	I0717 17:12:48.532387   22585 main.go:141] libmachine: (addons-435911) Calling .GetMachineName
	I0717 17:12:48.532630   22585 main.go:141] libmachine: (addons-435911) Calling .GetIP
	I0717 17:12:48.535212   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.535528   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.535554   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.535720   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.537747   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.538049   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.538078   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.538206   22585 provision.go:143] copyHostCerts
	I0717 17:12:48.538294   22585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 17:12:48.538424   22585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 17:12:48.538491   22585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 17:12:48.538550   22585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.addons-435911 san=[127.0.0.1 192.168.39.27 addons-435911 localhost minikube]
	I0717 17:12:48.622659   22585 provision.go:177] copyRemoteCerts
	I0717 17:12:48.622715   22585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 17:12:48.622739   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.625089   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.625450   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.625479   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.625676   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.625864   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.626027   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.626143   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:12:48.710270   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 17:12:48.732149   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 17:12:48.753354   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 17:12:48.774420   22585 provision.go:87] duration metric: took 242.030333ms to configureAuth
	I0717 17:12:48.774446   22585 buildroot.go:189] setting minikube options for container-runtime
	I0717 17:12:48.774642   22585 config.go:182] Loaded profile config "addons-435911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:12:48.774725   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.777231   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.777637   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.777666   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.777912   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.778066   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.778218   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.778341   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.778505   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:48.778710   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:48.778726   22585 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 17:12:49.032520   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 17:12:49.032549   22585 main.go:141] libmachine: Checking connection to Docker...
	I0717 17:12:49.032565   22585 main.go:141] libmachine: (addons-435911) Calling .GetURL
	I0717 17:12:49.033829   22585 main.go:141] libmachine: (addons-435911) DBG | Using libvirt version 6000000
	I0717 17:12:49.035798   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.036113   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.036143   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.036315   22585 main.go:141] libmachine: Docker is up and running!
	I0717 17:12:49.036345   22585 main.go:141] libmachine: Reticulating splines...
	I0717 17:12:49.036353   22585 client.go:171] duration metric: took 28.586414531s to LocalClient.Create
	I0717 17:12:49.036381   22585 start.go:167] duration metric: took 28.586477393s to libmachine.API.Create "addons-435911"
	I0717 17:12:49.036392   22585 start.go:293] postStartSetup for "addons-435911" (driver="kvm2")
	I0717 17:12:49.036405   22585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 17:12:49.036420   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:49.036654   22585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 17:12:49.036677   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:49.038670   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.038978   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.039013   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.039149   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:49.039343   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:49.039557   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:49.039747   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:12:49.126359   22585 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 17:12:49.129885   22585 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 17:12:49.129906   22585 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 17:12:49.129971   22585 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 17:12:49.130003   22585 start.go:296] duration metric: took 93.60127ms for postStartSetup
	I0717 17:12:49.130037   22585 main.go:141] libmachine: (addons-435911) Calling .GetConfigRaw
	I0717 17:12:49.130544   22585 main.go:141] libmachine: (addons-435911) Calling .GetIP
	I0717 17:12:49.132876   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.133220   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.133242   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.133505   22585 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/config.json ...
	I0717 17:12:49.133872   22585 start.go:128] duration metric: took 28.701458337s to createHost
	I0717 17:12:49.133914   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:49.135858   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.136178   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.136203   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.136361   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:49.136506   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:49.136672   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:49.136925   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:49.137124   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:49.137269   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:49.137279   22585 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 17:12:49.249043   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721236369.232773094
	
	I0717 17:12:49.249062   22585 fix.go:216] guest clock: 1721236369.232773094
	I0717 17:12:49.249071   22585 fix.go:229] Guest: 2024-07-17 17:12:49.232773094 +0000 UTC Remote: 2024-07-17 17:12:49.133891028 +0000 UTC m=+28.797781974 (delta=98.882066ms)
	I0717 17:12:49.249122   22585 fix.go:200] guest clock delta is within tolerance: 98.882066ms
	I0717 17:12:49.249133   22585 start.go:83] releasing machines lock for "addons-435911", held for 28.816804737s
	I0717 17:12:49.249164   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:49.249442   22585 main.go:141] libmachine: (addons-435911) Calling .GetIP
	I0717 17:12:49.251770   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.252124   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.252157   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.252326   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:49.252744   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:49.252902   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:49.252997   22585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 17:12:49.253047   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:49.253098   22585 ssh_runner.go:195] Run: cat /version.json
	I0717 17:12:49.253121   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:49.255579   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.255900   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.255927   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.255944   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.256131   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:49.256291   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:49.256298   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.256324   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.256426   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:49.256489   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:49.256550   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:12:49.256637   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:49.256752   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:49.256896   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:12:49.395192   22585 ssh_runner.go:195] Run: systemctl --version
	I0717 17:12:49.400814   22585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 17:12:49.559240   22585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 17:12:49.564537   22585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 17:12:49.564604   22585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 17:12:49.579940   22585 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 17:12:49.579968   22585 start.go:495] detecting cgroup driver to use...
	I0717 17:12:49.580029   22585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 17:12:49.596395   22585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 17:12:49.609240   22585 docker.go:217] disabling cri-docker service (if available) ...
	I0717 17:12:49.609285   22585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 17:12:49.621479   22585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 17:12:49.633458   22585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 17:12:49.738766   22585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 17:12:49.872432   22585 docker.go:233] disabling docker service ...
	I0717 17:12:49.872506   22585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 17:12:49.886498   22585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 17:12:49.898345   22585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 17:12:50.022237   22585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 17:12:50.151491   22585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 17:12:50.165373   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 17:12:50.182872   22585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 17:12:50.182924   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.192111   22585 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 17:12:50.192165   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.201488   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.210877   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.220202   22585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 17:12:50.229671   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.238829   22585 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.254364   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
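(Taken together, the sed edits above adjust the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf so it pins the pause image, uses the cgroupfs cgroup manager with conmon in the "pod" cgroup, and opens unprivileged low ports via default_sysctls. One way to inspect the result on the guest; the expected values below are assembled from the logged commands, not captured from the file:)
	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# expected to contain, roughly:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]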
	I0717 17:12:50.263454   22585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 17:12:50.271714   22585 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 17:12:50.271774   22585 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 17:12:50.283367   22585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 17:12:50.293533   22585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:12:50.412573   22585 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 17:12:50.542071   22585 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 17:12:50.542165   22585 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 17:12:50.546584   22585 start.go:563] Will wait 60s for crictl version
	I0717 17:12:50.546657   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:12:50.550138   22585 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 17:12:50.588068   22585 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 17:12:50.588171   22585 ssh_runner.go:195] Run: crio --version
	I0717 17:12:50.613570   22585 ssh_runner.go:195] Run: crio --version
	I0717 17:12:50.640793   22585 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 17:12:50.642142   22585 main.go:141] libmachine: (addons-435911) Calling .GetIP
	I0717 17:12:50.644630   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:50.644980   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:50.645006   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:50.645190   22585 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 17:12:50.648870   22585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:12:50.660006   22585 kubeadm.go:883] updating cluster {Name:addons-435911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-435911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 17:12:50.660100   22585 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:12:50.660136   22585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:12:50.689506   22585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 17:12:50.689565   22585 ssh_runner.go:195] Run: which lz4
	I0717 17:12:50.692956   22585 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 17:12:50.696514   22585 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 17:12:50.696540   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 17:12:51.816361   22585 crio.go:462] duration metric: took 1.123440309s to copy over tarball
	I0717 17:12:51.816426   22585 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 17:12:53.941290   22585 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.124828904s)
	I0717 17:12:53.941322   22585 crio.go:469] duration metric: took 2.124931521s to extract the tarball
	I0717 17:12:53.941331   22585 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 17:12:53.978909   22585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:12:54.017860   22585 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 17:12:54.017881   22585 cache_images.go:84] Images are preloaded, skipping loading
	I0717 17:12:54.017889   22585 kubeadm.go:934] updating node { 192.168.39.27 8443 v1.30.2 crio true true} ...
	I0717 17:12:54.017992   22585 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-435911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-435911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 17:12:54.018059   22585 ssh_runner.go:195] Run: crio config
	I0717 17:12:54.064558   22585 cni.go:84] Creating CNI manager for ""
	I0717 17:12:54.064582   22585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 17:12:54.064599   22585 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 17:12:54.064618   22585 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.27 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-435911 NodeName:addons-435911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 17:12:54.064748   22585 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.27
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-435911"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 17:12:54.064802   22585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 17:12:54.074116   22585 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 17:12:54.074175   22585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 17:12:54.082778   22585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0717 17:12:54.097885   22585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 17:12:54.112059   22585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
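(The kubeadm/kubelet/kube-proxy configuration printed above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new; later in this excerpt it is moved to /var/tmp/minikube/kubeadm.yaml and passed to kubeadm init. If such a file ever needs to be sanity-checked by hand, kubeadm's dry-run mode can parse and validate it without changing the node; an illustrative command this run did not execute:)
	sudo /var/lib/minikube/binaries/v1.30.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run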
	I0717 17:12:54.126647   22585 ssh_runner.go:195] Run: grep 192.168.39.27	control-plane.minikube.internal$ /etc/hosts
	I0717 17:12:54.130044   22585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:12:54.140671   22585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:12:54.266939   22585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:12:54.282934   22585 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911 for IP: 192.168.39.27
	I0717 17:12:54.282960   22585 certs.go:194] generating shared ca certs ...
	I0717 17:12:54.282987   22585 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.283187   22585 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 17:12:54.473224   22585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt ...
	I0717 17:12:54.473253   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt: {Name:mk17882ef5dcf40e93d7619736a48c61e30e328f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.473427   22585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key ...
	I0717 17:12:54.473439   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key: {Name:mk0fca5350592dfe5ae9d9677aec02e7fe7cc35c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.473507   22585 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 17:12:54.586696   22585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt ...
	I0717 17:12:54.586720   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt: {Name:mk4eea84367f846b920e703dd452e9f97fd8ad6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.586863   22585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key ...
	I0717 17:12:54.586872   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key: {Name:mkf201638f64cc3da374fe05d83585c5e0d0e704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.586935   22585 certs.go:256] generating profile certs ...
	I0717 17:12:54.586986   22585 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.key
	I0717 17:12:54.586999   22585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt with IP's: []
	I0717 17:12:54.668550   22585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt ...
	I0717 17:12:54.668576   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: {Name:mk357a8842a686268c508f5a902817e5bdcbe059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.668719   22585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.key ...
	I0717 17:12:54.668728   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.key: {Name:mk4dc1c4180c409187e71d4006f58e4110a1c65a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.668793   22585 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key.fd341990
	I0717 17:12:54.668810   22585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt.fd341990 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.27]
	I0717 17:12:54.931866   22585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt.fd341990 ...
	I0717 17:12:54.931895   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt.fd341990: {Name:mk982a56b4590d26e0b84c44a3e89439bfaadaab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.932043   22585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key.fd341990 ...
	I0717 17:12:54.932055   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key.fd341990: {Name:mk0f5fca9e43e6ff2c28cbdea47a8aba49c8ceb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.932122   22585 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt.fd341990 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt
	I0717 17:12:54.932217   22585 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key.fd341990 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key
	I0717 17:12:54.932268   22585 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.key
	I0717 17:12:54.932285   22585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.crt with IP's: []
	I0717 17:12:55.135230   22585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.crt ...
	I0717 17:12:55.135262   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.crt: {Name:mk7b35d8183089ba13b7664c58a1b1bac1809062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:55.135441   22585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.key ...
	I0717 17:12:55.135455   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.key: {Name:mk5be018df9a3c93dbcf168de48d35577e14e28c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:55.135649   22585 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 17:12:55.135683   22585 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 17:12:55.135706   22585 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 17:12:55.135728   22585 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 17:12:55.136234   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 17:12:55.159926   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 17:12:55.181764   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 17:12:55.203105   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 17:12:55.223656   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 17:12:55.244466   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 17:12:55.264750   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 17:12:55.285254   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 17:12:55.305892   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 17:12:55.326163   22585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 17:12:55.340858   22585 ssh_runner.go:195] Run: openssl version
	I0717 17:12:55.345887   22585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 17:12:55.355009   22585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:12:55.358796   22585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:12:55.358841   22585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:12:55.363751   22585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
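(The b5213941.0 link name above is the OpenSSL subject hash of the minikube CA, i.e. the value printed by the openssl x509 -hash command two lines earlier; the same link can be recreated by hand with a sketch that follows the commands already logged:)
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"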
	I0717 17:12:55.373131   22585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 17:12:55.376542   22585 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 17:12:55.376595   22585 kubeadm.go:392] StartCluster: {Name:addons-435911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-435911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:12:55.376664   22585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 17:12:55.376710   22585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 17:12:55.413278   22585 cri.go:89] found id: ""
	I0717 17:12:55.413375   22585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 17:12:55.422460   22585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 17:12:55.431257   22585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 17:12:55.439966   22585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 17:12:55.439994   22585 kubeadm.go:157] found existing configuration files:
	
	I0717 17:12:55.440043   22585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 17:12:55.448232   22585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 17:12:55.448281   22585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 17:12:55.457408   22585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 17:12:55.465504   22585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 17:12:55.465549   22585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 17:12:55.473958   22585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 17:12:55.481917   22585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 17:12:55.481965   22585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 17:12:55.490266   22585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 17:12:55.498275   22585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 17:12:55.498329   22585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 17:12:55.506459   22585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 17:12:55.566641   22585 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 17:12:55.566715   22585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 17:12:55.692512   22585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 17:12:55.692640   22585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 17:12:55.692783   22585 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 17:12:55.898992   22585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 17:12:56.068000   22585 out.go:204]   - Generating certificates and keys ...
	I0717 17:12:56.068114   22585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 17:12:56.068186   22585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 17:12:56.177420   22585 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 17:12:56.307917   22585 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 17:12:56.550912   22585 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 17:12:56.837583   22585 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 17:12:56.967747   22585 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 17:12:56.967962   22585 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-435911 localhost] and IPs [192.168.39.27 127.0.0.1 ::1]
	I0717 17:12:57.343309   22585 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 17:12:57.343455   22585 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-435911 localhost] and IPs [192.168.39.27 127.0.0.1 ::1]
	I0717 17:12:57.471197   22585 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 17:12:57.602649   22585 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 17:12:57.817098   22585 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 17:12:57.817247   22585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 17:12:57.967075   22585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 17:12:58.337958   22585 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 17:12:58.522373   22585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 17:12:58.690117   22585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 17:12:58.902991   22585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 17:12:58.903540   22585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 17:12:58.905931   22585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 17:12:58.907839   22585 out.go:204]   - Booting up control plane ...
	I0717 17:12:58.907968   22585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 17:12:58.908591   22585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 17:12:58.909329   22585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 17:12:58.922822   22585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 17:12:58.923787   22585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 17:12:58.923848   22585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 17:12:59.071566   22585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 17:12:59.071657   22585 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 17:13:00.072967   22585 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001727376s
	I0717 17:13:00.073089   22585 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 17:13:04.573653   22585 kubeadm.go:310] [api-check] The API server is healthy after 4.502109739s
	I0717 17:13:04.585369   22585 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 17:13:04.603555   22585 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 17:13:04.640321   22585 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 17:13:04.640503   22585 kubeadm.go:310] [mark-control-plane] Marking the node addons-435911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 17:13:04.652822   22585 kubeadm.go:310] [bootstrap-token] Using token: ch7c38.n9iekpckubhriss0
	I0717 17:13:04.654043   22585 out.go:204]   - Configuring RBAC rules ...
	I0717 17:13:04.654161   22585 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 17:13:04.659894   22585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 17:13:04.668604   22585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 17:13:04.671334   22585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 17:13:04.674287   22585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 17:13:04.677592   22585 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 17:13:04.981205   22585 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 17:13:05.418221   22585 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 17:13:05.983124   22585 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 17:13:05.984106   22585 kubeadm.go:310] 
	I0717 17:13:05.984196   22585 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 17:13:05.984214   22585 kubeadm.go:310] 
	I0717 17:13:05.984319   22585 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 17:13:05.984335   22585 kubeadm.go:310] 
	I0717 17:13:05.984376   22585 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 17:13:05.984458   22585 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 17:13:05.984628   22585 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 17:13:05.984647   22585 kubeadm.go:310] 
	I0717 17:13:05.984722   22585 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 17:13:05.984735   22585 kubeadm.go:310] 
	I0717 17:13:05.984805   22585 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 17:13:05.984813   22585 kubeadm.go:310] 
	I0717 17:13:05.984854   22585 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 17:13:05.984916   22585 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 17:13:05.985006   22585 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 17:13:05.985014   22585 kubeadm.go:310] 
	I0717 17:13:05.985081   22585 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 17:13:05.985146   22585 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 17:13:05.985152   22585 kubeadm.go:310] 
	I0717 17:13:05.985270   22585 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ch7c38.n9iekpckubhriss0 \
	I0717 17:13:05.985427   22585 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 17:13:05.985467   22585 kubeadm.go:310] 	--control-plane 
	I0717 17:13:05.985476   22585 kubeadm.go:310] 
	I0717 17:13:05.985583   22585 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 17:13:05.985594   22585 kubeadm.go:310] 
	I0717 17:13:05.985696   22585 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ch7c38.n9iekpckubhriss0 \
	I0717 17:13:05.985809   22585 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 17:13:05.986174   22585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 17:13:05.986194   22585 cni.go:84] Creating CNI manager for ""
	I0717 17:13:05.986201   22585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 17:13:05.987558   22585 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 17:13:05.988693   22585 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 17:13:05.998610   22585 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
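[editor's note] The log only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the file's contents are not captured in the report. For orientation, a bridge CNI conflist typically has the following shape. This is an illustrative sketch only: the plugin list, subnet, and flag values here are assumptions, not the bytes minikube actually wrote.

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }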
	I0717 17:13:06.017108   22585 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 17:13:06.017181   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:06.017196   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-435911 minikube.k8s.io/updated_at=2024_07_17T17_13_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=addons-435911 minikube.k8s.io/primary=true
	I0717 17:13:06.044030   22585 ops.go:34] apiserver oom_adj: -16
	I0717 17:13:06.149487   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:06.650151   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:07.150169   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:07.650579   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:08.149630   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:08.650545   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:09.149767   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:09.649718   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:10.150152   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:10.650485   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:11.149879   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:11.649857   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:12.149693   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:12.650443   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:13.150170   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:13.649826   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:14.150148   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:14.649892   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:15.150451   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:15.649910   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:16.150484   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:16.649656   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:17.150558   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:17.650581   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:18.149764   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:18.650282   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:19.149582   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:19.258555   22585 kubeadm.go:1113] duration metric: took 13.241430959s to wait for elevateKubeSystemPrivileges
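[editor's note] The run of repeated "kubectl get sa default" commands above is minikube waiting for the "default" ServiceAccount to exist before it grants kube-system RBAC; the log shows the check retried roughly every 500ms for ~13s. A minimal Go sketch of that poll-until-ready pattern follows. It is illustrative only: the function name, timeout, and error text are assumptions and this is not minikube's actual implementation, though the kubeconfig path and retry cadence are taken from the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls for the "default" ServiceAccount by shelling out to
    // kubectl, mirroring the retry loop visible in the log above.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		// Equivalent of: kubectl get sa default --kubeconfig=<path>
    		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil // ServiceAccount exists; the control plane is usable
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for the default ServiceAccount")
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen in the log
    	}
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }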
	I0717 17:13:19.258595   22585 kubeadm.go:394] duration metric: took 23.882003299s to StartCluster
	I0717 17:13:19.258620   22585 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:13:19.258753   22585 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:13:19.259238   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:13:19.259452   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 17:13:19.259480   22585 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:13:19.259547   22585 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 17:13:19.259649   22585 addons.go:69] Setting yakd=true in profile "addons-435911"
	I0717 17:13:19.259678   22585 addons.go:234] Setting addon yakd=true in "addons-435911"
	I0717 17:13:19.259706   22585 addons.go:69] Setting gcp-auth=true in profile "addons-435911"
	I0717 17:13:19.259734   22585 mustload.go:65] Loading cluster: addons-435911
	I0717 17:13:19.259735   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.259733   22585 config.go:182] Loaded profile config "addons-435911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:13:19.259687   22585 addons.go:69] Setting cloud-spanner=true in profile "addons-435911"
	I0717 17:13:19.259789   22585 addons.go:69] Setting storage-provisioner=true in profile "addons-435911"
	I0717 17:13:19.259807   22585 addons.go:234] Setting addon cloud-spanner=true in "addons-435911"
	I0717 17:13:19.259820   22585 addons.go:234] Setting addon storage-provisioner=true in "addons-435911"
	I0717 17:13:19.259839   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.259846   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.259911   22585 config.go:182] Loaded profile config "addons-435911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:13:19.260132   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260161   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.260227   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260245   22585 addons.go:69] Setting helm-tiller=true in profile "addons-435911"
	I0717 17:13:19.260260   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.260270   22585 addons.go:234] Setting addon helm-tiller=true in "addons-435911"
	I0717 17:13:19.260301   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.260348   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260374   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.260443   22585 addons.go:69] Setting ingress-dns=true in profile "addons-435911"
	I0717 17:13:19.260446   22585 addons.go:69] Setting ingress=true in profile "addons-435911"
	I0717 17:13:19.260479   22585 addons.go:234] Setting addon ingress=true in "addons-435911"
	I0717 17:13:19.260479   22585 addons.go:234] Setting addon ingress-dns=true in "addons-435911"
	I0717 17:13:19.260514   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.260517   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.260618   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260640   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.259693   22585 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-435911"
	I0717 17:13:19.260767   22585 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-435911"
	I0717 17:13:19.260776   22585 addons.go:69] Setting registry=true in profile "addons-435911"
	I0717 17:13:19.260802   22585 addons.go:234] Setting addon registry=true in "addons-435911"
	I0717 17:13:19.260803   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.260829   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.260236   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260873   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260882   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.260895   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.260893   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260928   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.261184   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.261198   22585 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-435911"
	I0717 17:13:19.261213   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.261221   22585 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-435911"
	I0717 17:13:19.261243   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.259702   22585 addons.go:69] Setting metrics-server=true in profile "addons-435911"
	I0717 17:13:19.261426   22585 addons.go:234] Setting addon metrics-server=true in "addons-435911"
	I0717 17:13:19.261453   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.261547   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.261565   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.261862   22585 addons.go:69] Setting volcano=true in profile "addons-435911"
	I0717 17:13:19.261895   22585 addons.go:234] Setting addon volcano=true in "addons-435911"
	I0717 17:13:19.261925   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.262288   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.262335   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.263313   22585 out.go:177] * Verifying Kubernetes components...
	I0717 17:13:19.263320   22585 addons.go:69] Setting volumesnapshots=true in profile "addons-435911"
	I0717 17:13:19.263360   22585 addons.go:234] Setting addon volumesnapshots=true in "addons-435911"
	I0717 17:13:19.263394   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.259671   22585 addons.go:69] Setting default-storageclass=true in profile "addons-435911"
	I0717 17:13:19.264009   22585 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-435911"
	I0717 17:13:19.266327   22585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:13:19.266408   22585 addons.go:69] Setting inspektor-gadget=true in profile "addons-435911"
	I0717 17:13:19.266431   22585 addons.go:234] Setting addon inspektor-gadget=true in "addons-435911"
	I0717 17:13:19.266454   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.266806   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.266824   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.267759   22585 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-435911"
	I0717 17:13:19.267789   22585 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-435911"
	I0717 17:13:19.261189   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.267881   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.268127   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.268143   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.282079   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0717 17:13:19.282617   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.283127   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.283149   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.283482   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.284037   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.284080   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.286386   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0717 17:13:19.286572   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I0717 17:13:19.286945   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.287676   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.287692   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.288038   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.288221   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.288425   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I0717 17:13:19.288882   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.288896   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0717 17:13:19.288984   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.289213   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.289405   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.289430   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.289671   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.289719   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.289820   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.289841   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.289907   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.290084   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.290117   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.290252   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.290492   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.290523   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.290603   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.290641   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.290911   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39115
	I0717 17:13:19.291183   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.291283   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.291322   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.291603   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.291627   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.292184   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.292576   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0717 17:13:19.297304   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.297338   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.297633   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.297634   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.297651   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.297656   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.297994   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.298014   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.298316   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.298344   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.309290   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39069
	I0717 17:13:19.309603   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.309723   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.310661   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.310684   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.310935   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.310951   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.311124   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.311359   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.311818   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.311855   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.313465   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.313512   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.318988   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0717 17:13:19.319555   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.320083   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.320099   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.320420   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.320566   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.322454   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.324848   22585 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 17:13:19.325998   22585 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 17:13:19.326016   22585 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 17:13:19.326037   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.328268   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45953
	I0717 17:13:19.328782   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.329329   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.329345   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.329397   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.329423   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.329438   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.329850   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.329885   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.330033   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.330214   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.330377   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.330855   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.330901   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.333974   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0717 17:13:19.337688   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35393
	I0717 17:13:19.337749   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I0717 17:13:19.338082   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0717 17:13:19.338221   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.338255   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.338472   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.338904   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.338923   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.338995   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.339011   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.339048   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.339059   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.339437   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.339472   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.340012   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.340037   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.340051   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.340069   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.340597   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36499
	I0717 17:13:19.340916   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.341025   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.342985   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I0717 17:13:19.343072   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.343088   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.343100   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.343741   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.343941   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.344006   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.345248   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.345266   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.345334   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.345821   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.345837   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.346465   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.346998   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.347562   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I0717 17:13:19.347680   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.347769   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.348138   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.348619   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.349376   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.349395   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.350011   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 17:13:19.350365   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.350366   22585 addons.go:234] Setting addon default-storageclass=true in "addons-435911"
	I0717 17:13:19.350421   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.350747   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.350784   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.350952   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.352520   22585 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 17:13:19.352876   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I0717 17:13:19.352893   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 17:13:19.353232   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.353676   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.353699   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.353992   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.354296   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.354367   22585 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 17:13:19.354379   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 17:13:19.354392   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.354649   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.355605   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 17:13:19.356409   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.357197   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44721
	I0717 17:13:19.357847   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.357869   22585 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-435911"
	I0717 17:13:19.357905   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.358282   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.358327   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.358396   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.358521   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0717 17:13:19.358619   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 17:13:19.358664   22585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 17:13:19.358899   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.358920   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.358938   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.359094   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.359137   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.359336   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.359470   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.359737   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44405
	I0717 17:13:19.359973   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.359990   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.360047   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.360518   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.360535   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.360748   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.360761   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.360812   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.361206   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.361215   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.361519   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.362017   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 17:13:19.362078   22585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 17:13:19.362375   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.362418   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.362829   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.364150   22585 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 17:13:19.364204   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 17:13:19.364224   22585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 17:13:19.365568   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.365612   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.365669   22585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 17:13:19.365689   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 17:13:19.365705   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.366040   22585 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 17:13:19.366060   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 17:13:19.366076   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.367349   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 17:13:19.368560   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 17:13:19.369573   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0717 17:13:19.369828   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 17:13:19.369850   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 17:13:19.369869   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.370312   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.372671   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.375717   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.375729   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.375730   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.375751   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43531
	I0717 17:13:19.375735   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.375811   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.375816   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.375830   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.375719   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.375850   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.375902   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.375920   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.376120   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.376125   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.376170   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.376362   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.376400   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.376439   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.376488   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.376563   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.376937   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.376962   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.376990   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.377006   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.377155   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.377391   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.377520   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.377583   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.377950   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.377965   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.378005   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.379470   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.381473   22585 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 17:13:19.382839   22585 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 17:13:19.382853   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44365
	I0717 17:13:19.382860   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 17:13:19.382877   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.383464   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46879
	I0717 17:13:19.383761   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.384381   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.384402   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.384559   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.384776   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.384990   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.385134   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.385156   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.386149   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.386554   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.386588   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.386764   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.386938   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.386985   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.387097   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.387220   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:19.387233   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:19.387230   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.387643   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:19.387654   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:19.387662   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:19.387668   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:19.389277   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:19.389281   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:19.389297   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 17:13:19.389396   22585 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0717 17:13:19.389967   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.390131   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.391949   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.395741   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46411
	I0717 17:13:19.395742   22585 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 17:13:19.396266   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.396795   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.396822   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.397225   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.397773   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0717 17:13:19.397819   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.397842   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.397950   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33921
	I0717 17:13:19.398166   22585 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 17:13:19.398183   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 17:13:19.398200   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.398428   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44489
	I0717 17:13:19.398632   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.398652   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.398967   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.399153   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.399176   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.399651   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.399674   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.399768   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.399787   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.399984   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.400109   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.400159   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.400299   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.401004   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.401212   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.402182   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.402259   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.403187   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.403188   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.403224   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.403395   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.403438   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.403663   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.403848   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.403988   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.404131   22585 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 17:13:19.404338   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0717 17:13:19.404837   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.404960   22585 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 17:13:19.405361   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.405389   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.405780   22585 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 17:13:19.406202   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.406748   22585 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 17:13:19.406769   22585 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 17:13:19.406781   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.406789   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.407228   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.407674   22585 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 17:13:19.407690   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 17:13:19.407882   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.408737   22585 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 17:13:19.410191   22585 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 17:13:19.410209   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0717 17:13:19.410226   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.411776   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.412263   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.412293   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.412431   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.412498   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.413187   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.413206   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.413391   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.413500   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40319
	I0717 17:13:19.413677   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.413827   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.414601   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.414646   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.414733   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.415399   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.415417   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.415613   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.415771   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.415918   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.416046   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.416812   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.416829   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.417780   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.418093   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.418161   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.418370   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.418799   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.420400   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.421114   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0717 17:13:19.421694   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.422238   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 17:13:19.422297   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.422312   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.422688   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.422756   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45781
	I0717 17:13:19.422905   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.423162   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.423493   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 17:13:19.423508   22585 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 17:13:19.423525   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.424511   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.424533   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.424602   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.424844   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.425048   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.425467   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42253
	I0717 17:13:19.425791   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.426146   22585 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 17:13:19.426307   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.426327   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.426654   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.426760   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.426823   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.427368   22585 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 17:13:19.427382   22585 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 17:13:19.427395   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.428092   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.428347   22585 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 17:13:19.428417   22585 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 17:13:19.428429   22585 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 17:13:19.428445   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.428831   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.429805   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.429832   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.430022   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.430165   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.430336   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.430611   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.431198   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.431385   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.431688   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.431708   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.431714   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.431729   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.431910   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.432042   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.432063   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.432181   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.432219   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.432316   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.432359   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.432475   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.432905   22585 out.go:177]   - Using image docker.io/busybox:stable
	W0717 17:13:19.433505   22585 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56152->192.168.39.27:22: read: connection reset by peer
	I0717 17:13:19.433528   22585 retry.go:31] will retry after 239.941694ms: ssh: handshake failed: read tcp 192.168.39.1:56152->192.168.39.27:22: read: connection reset by peer
	W0717 17:13:19.433571   22585 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56166->192.168.39.27:22: read: connection reset by peer
	I0717 17:13:19.433576   22585 retry.go:31] will retry after 252.999752ms: ssh: handshake failed: read tcp 192.168.39.1:56166->192.168.39.27:22: read: connection reset by peer
	I0717 17:13:19.434584   22585 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 17:13:19.434605   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 17:13:19.434616   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.437442   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.437817   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.437843   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.438000   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.438178   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.438342   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.438483   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
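The block above is the staging phase: for each addon, sshutil.go opens an SSH client to the node with the machine's id_rsa key and ssh_runner.go scp's the manifest into /etc/kubernetes/addons before the kubectl applies that follow. A minimal sketch of that connect-and-run pattern, assuming golang.org/x/crypto/ssh and a hypothetical key path (this is not minikube's actual sshutil/ssh_runner code):

// Sketch only: run the bundled kubectl on the node over SSH, as the log above does.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; the log uses the machine's id_rsa under .minikube/machines.
	key, err := os.ReadFile("/path/to/machines/addons-435911/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.27:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Apply one staged addon manifest with the node's own kubectl binary.
	out, err := sess.CombinedOutput(
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Println(string(out), err)
}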
	I0717 17:13:19.705863   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 17:13:19.744518   22585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:13:19.744758   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
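The /bin/bash pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts block in front of the forward plugin, and replaces the ConfigMap so that host.minikube.internal resolves to the host-only gateway (192.168.39.1 in this run). A minimal Go sketch of the same text transformation, purely illustrative and not the command minikube actually runs:

// Sketch only: insert a hosts{} block ahead of the forward plugin in a Corefile.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		// Place the hosts block immediately before the forward plugin, as the sed script does.
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line + "\n")
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}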
	I0717 17:13:19.766595   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 17:13:19.766614   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 17:13:19.835203   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 17:13:19.872004   22585 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 17:13:19.872029   22585 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 17:13:19.876973   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 17:13:19.887699   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 17:13:19.902999   22585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 17:13:19.903017   22585 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 17:13:19.909574   22585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 17:13:19.909595   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 17:13:19.960139   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 17:13:19.978621   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 17:13:19.978648   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 17:13:19.980168   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 17:13:19.991622   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 17:13:20.019750   22585 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 17:13:20.019775   22585 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 17:13:20.075569   22585 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 17:13:20.075593   22585 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 17:13:20.086721   22585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 17:13:20.086738   22585 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 17:13:20.116281   22585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 17:13:20.116301   22585 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 17:13:20.181868   22585 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 17:13:20.181889   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 17:13:20.186233   22585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 17:13:20.186253   22585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 17:13:20.199425   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 17:13:20.199444   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 17:13:20.266547   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 17:13:20.266569   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 17:13:20.293266   22585 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 17:13:20.293289   22585 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 17:13:20.317762   22585 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 17:13:20.317783   22585 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 17:13:20.357495   22585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 17:13:20.357521   22585 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 17:13:20.376508   22585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 17:13:20.376532   22585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 17:13:20.441215   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 17:13:20.449073   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 17:13:20.449098   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 17:13:20.471978   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 17:13:20.508928   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 17:13:20.511076   22585 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 17:13:20.511097   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 17:13:20.533950   22585 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 17:13:20.533978   22585 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 17:13:20.622759   22585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 17:13:20.622787   22585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 17:13:20.665467   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 17:13:20.665495   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 17:13:20.736531   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 17:13:20.807876   22585 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 17:13:20.807905   22585 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 17:13:20.953250   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 17:13:20.953273   22585 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 17:13:21.071942   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 17:13:21.071967   22585 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 17:13:21.116194   22585 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 17:13:21.116224   22585 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 17:13:21.127859   22585 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 17:13:21.127876   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 17:13:21.264815   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 17:13:21.264838   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 17:13:21.268500   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 17:13:21.277441   22585 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 17:13:21.277461   22585 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 17:13:21.470115   22585 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 17:13:21.470142   22585 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 17:13:21.494235   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 17:13:21.494253   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 17:13:21.658226   22585 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 17:13:21.658248   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 17:13:21.764123   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 17:13:21.764142   22585 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 17:13:21.909230   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 17:13:22.015063   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 17:13:26.400207   22585 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 17:13:26.400241   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:26.403825   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:26.404251   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:26.404276   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:26.404469   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:26.404698   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:26.404872   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:26.405050   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:26.631674   22585 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 17:13:26.676016   22585 addons.go:234] Setting addon gcp-auth=true in "addons-435911"
	I0717 17:13:26.676071   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:26.676400   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:26.676428   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:26.691144   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I0717 17:13:26.691581   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:26.692067   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:26.692094   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:26.692389   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:26.692939   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:26.692992   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:26.708917   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0717 17:13:26.709317   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:26.709805   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:26.709835   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:26.710127   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:26.710355   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:26.711966   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:26.712193   22585 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 17:13:26.712218   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:26.715201   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:26.715628   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:26.715680   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:26.715803   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:26.716001   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:26.716158   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:26.716309   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:27.274764   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.568868073s)
	I0717 17:13:27.274813   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.274825   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.274828   22585 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.530279475s)
	I0717 17:13:27.274956   22585 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.530161645s)
	I0717 17:13:27.274982   22585 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 17:13:27.275037   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.439806821s)
	I0717 17:13:27.275078   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275089   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275126   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.398124786s)
	I0717 17:13:27.275164   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275177   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.275189   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.275202   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.387476885s)
	I0717 17:13:27.275209   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.275220   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275222   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275229   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275232   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275264   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275265   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.315095591s)
	I0717 17:13:27.275286   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275297   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275333   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.295140583s)
	I0717 17:13:27.275356   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275364   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275372   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.283728421s)
	I0717 17:13:27.275390   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275397   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275422   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.834175132s)
	I0717 17:13:27.275436   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275444   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275457   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.803450658s)
	I0717 17:13:27.275472   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275481   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275533   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.766565698s)
	I0717 17:13:27.275548   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275555   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275613   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.539054556s)
	I0717 17:13:27.275627   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275636   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275750   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.007224101s)
	W0717 17:13:27.275779   22585 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 17:13:27.275798   22585 retry.go:31] will retry after 309.615159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
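The failure above is an ordering problem rather than a broken manifest: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, so the API server has no REST mapping for the kind yet ("ensure CRDs are installed first"). minikube's retry.go simply waits (about 310ms here) and re-applies once the CRDs are established. A minimal retry-with-backoff sketch, an assumption about the pattern rather than minikube's retry.go:

// Sketch only: retry a failing operation with a growing delay between attempts.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off between attempts
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 2 {
			// First attempt fails the same way as the log: the CRD is not established yet.
			return errors.New(`no matches for kind "VolumeSnapshotClass"`)
		}
		return nil // CRDs established, apply succeeds
	})
	fmt.Println("result:", err)
}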
	I0717 17:13:27.275823   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.275856   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.275863   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.275871   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275881   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275871   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.366613453s)
	I0717 17:13:27.275929   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275936   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.276220   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.276230   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.276238   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.276245   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.276302   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.276320   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.276325   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.276332   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.276338   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.276498   22585 node_ready.go:35] waiting up to 6m0s for node "addons-435911" to be "Ready" ...
	I0717 17:13:27.276573   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.276590   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.276610   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.276628   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.276634   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.276641   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.278202   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.278213   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.278222   22585 addons.go:475] Verifying addon ingress=true in "addons-435911"
	I0717 17:13:27.279592   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279611   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279621   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.279643   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279650   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279658   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.279665   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.279712   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279718   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279724   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.279731   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.279770   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.279788   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.279809   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279820   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279828   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.279834   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.279878   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279886   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279893   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.279899   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.279935   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.279955   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279961   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279968   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.279975   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.280018   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.280038   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.280043   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.280051   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.280057   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.280198   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.280233   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.280242   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.280657   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.280682   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.280689   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.281566   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.281634   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.281642   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.281743   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.281763   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.281770   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.281918   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.281937   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.281951   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.281960   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.281966   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.281969   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.281977   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.281983   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.281989   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.279599   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.282099   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.282103   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.282111   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.282121   22585 addons.go:475] Verifying addon metrics-server=true in "addons-435911"
	I0717 17:13:27.282122   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.282134   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.282142   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.282145   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.282150   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.282215   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.282222   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.282547   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.282563   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.282573   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.282585   22585 addons.go:475] Verifying addon registry=true in "addons-435911"
	I0717 17:13:27.283871   22585 out.go:177] * Verifying ingress addon...
	I0717 17:13:27.283889   22585 out.go:177] * Verifying registry addon...
	I0717 17:13:27.283873   22585 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-435911 service yakd-dashboard -n yakd-dashboard
	
	I0717 17:13:27.285947   22585 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 17:13:27.286461   22585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 17:13:27.298956   22585 node_ready.go:49] node "addons-435911" has status "Ready":"True"
	I0717 17:13:27.298974   22585 node_ready.go:38] duration metric: took 22.455685ms for node "addons-435911" to be "Ready" ...
	I0717 17:13:27.298984   22585 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 17:13:27.343374   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.343394   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.343651   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.343675   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.343799   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.343814   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.343835   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.343841   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.343848   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 17:13:27.343926   22585 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0717 17:13:27.353750   22585 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 17:13:27.353772   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:27.353899   22585 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 17:13:27.353913   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:27.367217   22585 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g8svc" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.401392   22585 pod_ready.go:92] pod "coredns-7db6d8ff4d-g8svc" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:27.401410   22585 pod_ready.go:81] duration metric: took 34.173211ms for pod "coredns-7db6d8ff4d-g8svc" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.401420   22585 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ktksd" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.486557   22585 pod_ready.go:92] pod "coredns-7db6d8ff4d-ktksd" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:27.486586   22585 pod_ready.go:81] duration metric: took 85.16004ms for pod "coredns-7db6d8ff4d-ktksd" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.486597   22585 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.502686   22585 pod_ready.go:92] pod "etcd-addons-435911" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:27.502710   22585 pod_ready.go:81] duration metric: took 16.106815ms for pod "etcd-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.502724   22585 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.519688   22585 pod_ready.go:92] pod "kube-apiserver-addons-435911" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:27.519707   22585 pod_ready.go:81] duration metric: took 16.977944ms for pod "kube-apiserver-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.519717   22585 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.585902   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 17:13:27.679243   22585 pod_ready.go:92] pod "kube-controller-manager-addons-435911" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:27.679265   22585 pod_ready.go:81] duration metric: took 159.541766ms for pod "kube-controller-manager-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.679283   22585 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s2kxf" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.779761   22585 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-435911" context rescaled to 1 replicas
	I0717 17:13:27.810226   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:27.812112   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:28.095268   22585 pod_ready.go:92] pod "kube-proxy-s2kxf" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:28.095289   22585 pod_ready.go:81] duration metric: took 416.000282ms for pod "kube-proxy-s2kxf" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:28.095317   22585 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:28.130754   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.115637868s)
	I0717 17:13:28.130793   22585 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.418582136s)
	I0717 17:13:28.130803   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:28.130817   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:28.131188   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:28.131236   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:28.131260   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:28.131273   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:28.131246   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:28.131487   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:28.131523   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:28.131537   22585 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-435911"
	I0717 17:13:28.132340   22585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 17:13:28.133142   22585 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 17:13:28.134684   22585 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 17:13:28.135424   22585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 17:13:28.136073   22585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 17:13:28.136088   22585 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 17:13:28.162119   22585 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 17:13:28.162140   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:28.218709   22585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 17:13:28.218728   22585 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 17:13:28.239543   22585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 17:13:28.239564   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 17:13:28.257853   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 17:13:28.294404   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:28.294678   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:28.481026   22585 pod_ready.go:92] pod "kube-scheduler-addons-435911" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:28.481054   22585 pod_ready.go:81] duration metric: took 385.728782ms for pod "kube-scheduler-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:28.481068   22585 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:28.643468   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:28.793716   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:28.796515   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:29.141494   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:29.291068   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:29.291634   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:29.406316   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.82036948s)
	I0717 17:13:29.406376   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:29.406393   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:29.406640   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:29.406678   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:29.406692   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:29.406700   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:29.406909   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:29.406925   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:29.661205   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:29.817221   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:29.817281   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:29.867825   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.60993507s)
	I0717 17:13:29.867881   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:29.867891   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:29.868165   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:29.868183   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:29.868206   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:29.868270   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:29.868277   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:29.868504   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:29.868515   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:29.868558   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:29.869889   22585 addons.go:475] Verifying addon gcp-auth=true in "addons-435911"
	I0717 17:13:29.871434   22585 out.go:177] * Verifying gcp-auth addon...
	I0717 17:13:29.873679   22585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 17:13:29.895767   22585 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 17:13:29.895799   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:30.140660   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:30.292316   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:30.294170   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:30.387469   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:30.488638   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:30.643972   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:30.792329   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:30.792989   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:30.884719   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:31.140493   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:31.291776   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:31.291899   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:31.377684   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:31.640813   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:31.791547   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:31.793109   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:31.878065   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:32.140249   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:32.289968   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:32.291541   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:32.377561   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:32.641818   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:32.791237   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:32.791487   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:32.877163   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:32.988750   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:33.141137   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:33.290168   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:33.291866   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:33.377813   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:33.640182   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:33.791000   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:33.791493   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:33.877709   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:34.141092   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:34.290754   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:34.292652   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:34.376819   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:34.648378   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:34.791007   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:34.791366   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:34.877122   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:35.140649   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:35.298060   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:35.303637   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:35.377875   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:35.486118   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:35.648598   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:35.790916   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:35.792643   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:35.877316   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:36.140931   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:36.290769   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:36.291319   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:36.377423   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:36.771937   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:36.791995   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:36.794035   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:36.877253   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:37.142416   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:37.294458   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:37.295899   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:37.378105   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:37.486808   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:37.641006   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:37.789878   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:37.791006   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:37.876497   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:38.140920   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:38.297672   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:38.297829   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:38.377549   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:38.641027   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:38.790236   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:38.792740   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:38.877179   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:39.140385   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:39.292467   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:39.292795   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:39.377798   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:39.640313   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:39.792764   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:39.792918   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:39.878364   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:39.985831   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:40.141468   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:40.294604   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:40.294830   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:40.377851   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:40.641101   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:40.791642   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:40.792377   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:40.877812   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:41.141303   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:41.290479   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:41.293165   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:41.377860   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:41.641829   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:41.801163   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:41.802323   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:41.877739   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:41.986671   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:42.142097   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:42.560308   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:42.561443   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:42.561725   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:42.641199   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:42.793545   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:42.795432   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:42.877086   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:43.143660   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:43.289529   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:43.295135   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:43.376834   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:43.640875   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:43.796874   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:43.797541   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:43.877739   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:43.988681   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:44.141867   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:44.291643   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:44.293206   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:44.590560   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:44.776929   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:44.797756   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:44.798011   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:44.877233   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:45.141251   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:45.293243   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:45.296349   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:45.378630   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:45.643857   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:45.790896   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:45.791135   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:45.876899   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:46.141163   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:46.290132   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:46.291580   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:46.377495   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:46.486223   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:46.641310   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:46.790910   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:46.791636   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:46.877656   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:47.141596   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:47.291688   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:47.291848   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:47.377976   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:47.641520   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:47.791877   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:47.792025   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:47.877726   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:48.140656   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:48.291771   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:48.292749   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:48.377088   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:48.492345   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:48.640688   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:48.795009   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:48.795429   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:48.884648   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:49.140976   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:49.290067   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:49.292506   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:49.376491   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:49.641334   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:49.810672   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:49.812017   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:49.877727   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:50.140575   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:50.292112   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:50.293444   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:50.380328   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:50.640391   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:50.790943   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:50.792356   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:50.877250   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:50.987963   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:51.140781   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:51.291408   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:51.294802   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:51.377933   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:51.640685   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:51.790095   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:51.791560   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:51.880253   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:52.140966   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:52.291218   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:52.291223   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:52.376811   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:52.641590   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:52.792037   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:52.792285   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:52.883179   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:52.990905   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:53.141198   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:53.290578   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:53.292500   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:53.377572   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:53.640309   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:53.790801   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:53.792375   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:53.882499   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:54.141835   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:54.294795   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:54.296095   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:54.377916   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:54.640235   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:54.792333   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:54.792572   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:54.877876   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:55.141931   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:55.292715   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:55.295442   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:55.377448   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:55.486975   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:55.652236   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:55.791258   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:55.791549   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:55.877657   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:56.140899   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:56.291456   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:56.292166   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:56.376882   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:56.641246   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:56.791854   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:56.793108   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:56.876833   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:57.140100   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:57.290534   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:57.292058   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:57.376837   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:57.642749   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:57.791226   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:57.791496   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:57.877123   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:57.991113   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:58.140185   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:58.290893   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:58.290952   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:58.378281   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:58.641102   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:58.791402   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:58.791745   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:58.880000   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:59.488939   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:59.489830   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:59.490355   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:59.494910   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:59.642203   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:59.794706   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:59.794878   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:59.877389   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:00.139974   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:00.292113   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:00.292933   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:00.377028   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:00.486514   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:00.641482   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:00.790518   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:00.790878   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:00.877583   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:01.140615   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:01.289607   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:01.291384   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:01.377854   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:01.640127   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:01.792069   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:01.793178   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:01.877984   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:02.140697   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:02.289348   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:02.291741   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:02.377508   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:02.653068   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:02.791658   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:02.791806   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:02.876654   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:02.986552   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:03.139878   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:03.291004   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:03.291392   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:03.378056   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:03.640806   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:03.791111   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:03.791328   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:03.876966   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:04.140644   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:04.289618   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:04.291809   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:04.378790   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:04.953821   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:04.953898   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:04.953901   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:04.953966   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:04.991276   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:05.141477   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:05.291839   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:05.292506   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:05.377501   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:05.640404   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:05.792436   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:05.792643   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:05.877250   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:06.141004   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:06.290815   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:06.291521   22585 kapi.go:107] duration metric: took 39.005059103s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 17:14:06.377255   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:06.641789   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:06.790294   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:06.877269   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:07.140280   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:07.290103   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:07.381226   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:07.487094   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:07.641652   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:07.790703   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:07.877286   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:08.140964   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:08.296229   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:08.380241   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:08.640796   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:08.791121   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:08.878665   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:09.145575   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:09.292026   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:09.378606   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:09.487251   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:09.640564   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:09.791554   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:09.970621   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:10.140810   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:10.290435   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:10.377285   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:10.640583   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:10.789501   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:10.877544   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:11.141561   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:11.290667   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:11.377412   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:11.640310   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:11.790900   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:11.878337   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:11.987119   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:12.141130   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:12.290424   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:12.377844   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:12.640376   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:12.791383   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:12.876964   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:13.189503   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:13.290311   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:13.377807   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:13.641285   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:13.790517   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:13.877891   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:13.987714   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:14.140449   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:14.291024   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:14.378105   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:14.639788   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:14.790928   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:14.878117   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:15.141089   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:15.291491   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:15.380415   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:15.640205   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:15.790896   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:15.877961   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:16.141151   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:16.290447   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:16.378192   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:16.488690   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:16.646594   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:16.792529   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:16.877721   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:17.140675   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:17.289650   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:17.376746   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:17.641153   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:17.790340   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:17.879465   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:18.141437   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:18.290359   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:18.378938   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:18.640524   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:18.790656   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:18.877751   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:18.986359   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:19.149834   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:19.292731   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:19.377842   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:19.641049   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:19.790575   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:19.877582   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:20.141617   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:20.291749   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:20.378141   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:20.640984   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:20.791767   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:20.879559   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:20.986772   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:21.140539   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:21.291069   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:21.377464   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:21.640652   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:21.792323   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:21.878670   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:22.141248   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:22.290616   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:22.376864   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:22.641393   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:22.790412   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:22.878253   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:22.987010   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:23.141356   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:23.487504   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:23.490091   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:23.641355   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:23.790142   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:23.877803   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:24.141169   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:24.290610   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:24.376855   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:24.639653   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:24.789876   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:24.877829   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:25.140425   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:25.290673   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:25.377683   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:25.490099   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:25.640631   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:25.794052   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:25.878116   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:26.140412   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:26.291304   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:26.377696   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:26.640414   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:26.790699   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:26.877644   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:27.418126   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:27.420384   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:27.421097   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:27.643481   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:27.790547   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:27.888655   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:27.988750   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:28.140486   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:28.291220   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:28.385544   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:28.640746   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:28.791390   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:28.877663   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:29.141223   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:29.291310   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:29.380113   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:29.647933   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:29.792217   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:29.881418   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:30.141220   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:30.291343   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:30.377167   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:30.487613   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:30.639971   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:30.790285   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:30.876651   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:31.141066   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:31.290291   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:31.376956   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:31.648489   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:31.790861   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:31.878623   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:32.140447   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:32.290863   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:32.378550   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:32.495069   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:32.640902   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:32.790968   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:32.878870   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:33.141426   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:33.290874   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:33.377887   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:33.640768   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:33.791424   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:33.876624   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:34.145749   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:34.292080   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:34.377751   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:34.663499   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:34.791489   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:34.879404   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:34.986757   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:35.141070   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:35.290916   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:35.379037   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:35.641916   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:35.791734   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:35.877783   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:36.141381   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:36.290517   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:36.377155   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:36.640827   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:36.790673   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:36.877777   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:37.140627   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:37.290436   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:37.378829   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:37.486813   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:37.641257   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:37.790503   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:37.887156   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:38.140572   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:38.290700   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:38.377673   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:38.641654   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:38.791028   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:38.877256   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:39.141070   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:39.290055   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:39.377772   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:39.486910   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:39.640219   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:39.790385   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:39.877587   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:40.143175   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:40.292702   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:40.381640   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:40.640495   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:40.790741   22585 kapi.go:107] duration metric: took 1m13.504794123s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 17:14:40.877249   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:41.140746   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:41.377393   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:41.655015   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:41.877757   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:41.986472   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:42.141038   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:42.377489   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:42.640631   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:42.877603   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:43.141542   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:43.377722   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:43.640393   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:43.876744   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:43.987203   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:44.140797   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:44.376970   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:44.641095   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:44.877136   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:45.140187   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:45.376905   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:45.640785   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:45.878006   22585 kapi.go:107] duration metric: took 1m16.004325711s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 17:14:45.879791   22585 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-435911 cluster.
	I0717 17:14:45.881165   22585 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 17:14:45.882381   22585 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 17:14:46.142780   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:46.486972   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:46.640607   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:47.140932   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:47.640345   22585 kapi.go:107] duration metric: took 1m19.504917945s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 17:14:47.642128   22585 out.go:177] * Enabled addons: storage-provisioner, inspektor-gadget, ingress-dns, nvidia-device-plugin, helm-tiller, metrics-server, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0717 17:14:47.643397   22585 addons.go:510] duration metric: took 1m28.383848509s for enable addons: enabled=[storage-provisioner inspektor-gadget ingress-dns nvidia-device-plugin helm-tiller metrics-server cloud-spanner yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0717 17:14:48.567610   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:50.986936   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:53.487336   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:55.488764   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:57.986596   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:15:00.487301   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:15:02.487992   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:15:04.986282   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:15:05.987028   22585 pod_ready.go:92] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"True"
	I0717 17:15:05.987049   22585 pod_ready.go:81] duration metric: took 1m37.505973488s for pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace to be "Ready" ...
	I0717 17:15:05.987060   22585 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xst8q" in "kube-system" namespace to be "Ready" ...
	I0717 17:15:05.990897   22585 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-xst8q" in "kube-system" namespace has status "Ready":"True"
	I0717 17:15:05.990914   22585 pod_ready.go:81] duration metric: took 3.847877ms for pod "nvidia-device-plugin-daemonset-xst8q" in "kube-system" namespace to be "Ready" ...
	I0717 17:15:05.990930   22585 pod_ready.go:38] duration metric: took 1m38.691935933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 17:15:05.990947   22585 api_server.go:52] waiting for apiserver process to appear ...
	I0717 17:15:05.991001   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 17:15:05.991055   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 17:15:06.040532   22585 cri.go:89] found id: "fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:06.040562   22585 cri.go:89] found id: ""
	I0717 17:15:06.040570   22585 logs.go:276] 1 containers: [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0]
	I0717 17:15:06.040632   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.044413   22585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 17:15:06.044470   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 17:15:06.079775   22585 cri.go:89] found id: "8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:06.079800   22585 cri.go:89] found id: ""
	I0717 17:15:06.079808   22585 logs.go:276] 1 containers: [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301]
	I0717 17:15:06.079869   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.083396   22585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 17:15:06.083449   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 17:15:06.120734   22585 cri.go:89] found id: "65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:06.120752   22585 cri.go:89] found id: ""
	I0717 17:15:06.120759   22585 logs.go:276] 1 containers: [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3]
	I0717 17:15:06.120801   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.124643   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 17:15:06.124711   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 17:15:06.169611   22585 cri.go:89] found id: "e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:06.169631   22585 cri.go:89] found id: ""
	I0717 17:15:06.169640   22585 logs.go:276] 1 containers: [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49]
	I0717 17:15:06.169698   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.175354   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 17:15:06.175410   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 17:15:06.216006   22585 cri.go:89] found id: "e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:06.216024   22585 cri.go:89] found id: ""
	I0717 17:15:06.216031   22585 logs.go:276] 1 containers: [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e]
	I0717 17:15:06.216073   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.220002   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 17:15:06.220057   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 17:15:06.257959   22585 cri.go:89] found id: "9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:06.257978   22585 cri.go:89] found id: ""
	I0717 17:15:06.257985   22585 logs.go:276] 1 containers: [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f]
	I0717 17:15:06.258030   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.261743   22585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 17:15:06.261798   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 17:15:06.296423   22585 cri.go:89] found id: ""
	I0717 17:15:06.296451   22585 logs.go:276] 0 containers: []
	W0717 17:15:06.296462   22585 logs.go:278] No container was found matching "kindnet"
	I0717 17:15:06.296471   22585 logs.go:123] Gathering logs for kube-proxy [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e] ...
	I0717 17:15:06.296483   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:06.333111   22585 logs.go:123] Gathering logs for CRI-O ...
	I0717 17:15:06.333144   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 17:15:07.365377   22585 logs.go:123] Gathering logs for kubelet ...
	I0717 17:15:07.365425   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 17:15:07.419259   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.707621    1283 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.419487   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.707649    1283 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.421391   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.961993    1283 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.421543   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.423601   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.423754   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.423891   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.424078   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:07.449610   22585 logs.go:123] Gathering logs for dmesg ...
	I0717 17:15:07.449645   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 17:15:07.464490   22585 logs.go:123] Gathering logs for describe nodes ...
	I0717 17:15:07.464519   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 17:15:07.588650   22585 logs.go:123] Gathering logs for kube-apiserver [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0] ...
	I0717 17:15:07.588681   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:07.647970   22585 logs.go:123] Gathering logs for etcd [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301] ...
	I0717 17:15:07.648002   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:07.718776   22585 logs.go:123] Gathering logs for coredns [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3] ...
	I0717 17:15:07.718811   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:07.756847   22585 logs.go:123] Gathering logs for kube-scheduler [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49] ...
	I0717 17:15:07.756886   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:07.808408   22585 logs.go:123] Gathering logs for kube-controller-manager [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f] ...
	I0717 17:15:07.808439   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:07.865958   22585 logs.go:123] Gathering logs for container status ...
	I0717 17:15:07.865990   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 17:15:07.910488   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:07.910520   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 17:15:07.910587   22585 out.go:239] X Problems detected in kubelet:
	W0717 17:15:07.910599   22585 out.go:239]   Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.910613   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.910625   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.910639   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.910650   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:07.910660   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:07.910670   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:15:17.912372   22585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:15:17.946433   22585 api_server.go:72] duration metric: took 1m58.686913769s to wait for apiserver process to appear ...
	I0717 17:15:17.946462   22585 api_server.go:88] waiting for apiserver healthz status ...
	I0717 17:15:17.946498   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 17:15:17.946554   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 17:15:17.995751   22585 cri.go:89] found id: "fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:17.995774   22585 cri.go:89] found id: ""
	I0717 17:15:17.995782   22585 logs.go:276] 1 containers: [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0]
	I0717 17:15:17.995835   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.000045   22585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 17:15:18.000108   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 17:15:18.052831   22585 cri.go:89] found id: "8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:18.052857   22585 cri.go:89] found id: ""
	I0717 17:15:18.052867   22585 logs.go:276] 1 containers: [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301]
	I0717 17:15:18.052923   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.058072   22585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 17:15:18.058142   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 17:15:18.105473   22585 cri.go:89] found id: "65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:18.105491   22585 cri.go:89] found id: ""
	I0717 17:15:18.105498   22585 logs.go:276] 1 containers: [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3]
	I0717 17:15:18.105542   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.109700   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 17:15:18.109777   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 17:15:18.152712   22585 cri.go:89] found id: "e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:18.152735   22585 cri.go:89] found id: ""
	I0717 17:15:18.152743   22585 logs.go:276] 1 containers: [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49]
	I0717 17:15:18.152789   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.157009   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 17:15:18.157062   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 17:15:18.216897   22585 cri.go:89] found id: "e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:18.216918   22585 cri.go:89] found id: ""
	I0717 17:15:18.216926   22585 logs.go:276] 1 containers: [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e]
	I0717 17:15:18.216989   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.221020   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 17:15:18.221081   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 17:15:18.269293   22585 cri.go:89] found id: "9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:18.269320   22585 cri.go:89] found id: ""
	I0717 17:15:18.269330   22585 logs.go:276] 1 containers: [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f]
	I0717 17:15:18.269383   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.275239   22585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 17:15:18.275301   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 17:15:18.328171   22585 cri.go:89] found id: ""
	I0717 17:15:18.328199   22585 logs.go:276] 0 containers: []
	W0717 17:15:18.328208   22585 logs.go:278] No container was found matching "kindnet"
	I0717 17:15:18.328216   22585 logs.go:123] Gathering logs for describe nodes ...
	I0717 17:15:18.328230   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 17:15:18.477522   22585 logs.go:123] Gathering logs for coredns [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3] ...
	I0717 17:15:18.477555   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:18.531422   22585 logs.go:123] Gathering logs for kube-proxy [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e] ...
	I0717 17:15:18.531453   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:18.576084   22585 logs.go:123] Gathering logs for kube-controller-manager [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f] ...
	I0717 17:15:18.576112   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:18.658322   22585 logs.go:123] Gathering logs for CRI-O ...
	I0717 17:15:18.658356   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 17:15:19.594678   22585 logs.go:123] Gathering logs for dmesg ...
	I0717 17:15:19.594713   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 17:15:19.608749   22585 logs.go:123] Gathering logs for kube-apiserver [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0] ...
	I0717 17:15:19.608778   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:19.679044   22585 logs.go:123] Gathering logs for etcd [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301] ...
	I0717 17:15:19.679080   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:19.787871   22585 logs.go:123] Gathering logs for kube-scheduler [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49] ...
	I0717 17:15:19.787899   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:19.833819   22585 logs.go:123] Gathering logs for container status ...
	I0717 17:15:19.833844   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 17:15:19.883400   22585 logs.go:123] Gathering logs for kubelet ...
	I0717 17:15:19.883430   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 17:15:19.934522   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.707621    1283 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.934738   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.707649    1283 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.936554   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.961993    1283 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.936703   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.938699   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.938853   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.938987   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.939141   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:19.964558   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:19.964583   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 17:15:19.964631   22585 out.go:239] X Problems detected in kubelet:
	W0717 17:15:19.964642   22585 out.go:239]   Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.964654   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.964667   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.964676   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.964682   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:19.964688   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:19.964693   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:15:29.965835   22585 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8443/healthz ...
	I0717 17:15:29.972025   22585 api_server.go:279] https://192.168.39.27:8443/healthz returned 200:
	ok
	I0717 17:15:29.974152   22585 api_server.go:141] control plane version: v1.30.2
	I0717 17:15:29.974173   22585 api_server.go:131] duration metric: took 12.027705124s to wait for apiserver health ...
	I0717 17:15:29.974182   22585 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 17:15:29.974206   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 17:15:29.974254   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 17:15:30.011571   22585 cri.go:89] found id: "fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:30.011602   22585 cri.go:89] found id: ""
	I0717 17:15:30.011611   22585 logs.go:276] 1 containers: [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0]
	I0717 17:15:30.011658   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.015694   22585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 17:15:30.015746   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 17:15:30.058475   22585 cri.go:89] found id: "8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:30.058500   22585 cri.go:89] found id: ""
	I0717 17:15:30.058508   22585 logs.go:276] 1 containers: [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301]
	I0717 17:15:30.058560   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.062635   22585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 17:15:30.062699   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 17:15:30.100924   22585 cri.go:89] found id: "65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:30.100960   22585 cri.go:89] found id: ""
	I0717 17:15:30.100970   22585 logs.go:276] 1 containers: [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3]
	I0717 17:15:30.101020   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.104842   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 17:15:30.104896   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 17:15:30.139813   22585 cri.go:89] found id: "e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:30.139833   22585 cri.go:89] found id: ""
	I0717 17:15:30.139842   22585 logs.go:276] 1 containers: [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49]
	I0717 17:15:30.139891   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.143375   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 17:15:30.143420   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 17:15:30.183734   22585 cri.go:89] found id: "e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:30.183760   22585 cri.go:89] found id: ""
	I0717 17:15:30.183770   22585 logs.go:276] 1 containers: [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e]
	I0717 17:15:30.183827   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.187742   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 17:15:30.187797   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 17:15:30.225009   22585 cri.go:89] found id: "9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:30.225034   22585 cri.go:89] found id: ""
	I0717 17:15:30.225043   22585 logs.go:276] 1 containers: [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f]
	I0717 17:15:30.225097   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.229002   22585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 17:15:30.229074   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 17:15:30.264970   22585 cri.go:89] found id: ""
	I0717 17:15:30.264996   22585 logs.go:276] 0 containers: []
	W0717 17:15:30.265005   22585 logs.go:278] No container was found matching "kindnet"
	I0717 17:15:30.265015   22585 logs.go:123] Gathering logs for container status ...
	I0717 17:15:30.265029   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 17:15:30.316390   22585 logs.go:123] Gathering logs for kubelet ...
	I0717 17:15:30.316421   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 17:15:30.367990   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.707621    1283 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.368159   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.707649    1283 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.370077   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.961993    1283 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.370229   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.372199   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.372348   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.372482   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.372632   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:30.398511   22585 logs.go:123] Gathering logs for describe nodes ...
	I0717 17:15:30.398537   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 17:15:30.513573   22585 logs.go:123] Gathering logs for etcd [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301] ...
	I0717 17:15:30.513601   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:30.578793   22585 logs.go:123] Gathering logs for kube-proxy [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e] ...
	I0717 17:15:30.578827   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:30.615776   22585 logs.go:123] Gathering logs for kube-controller-manager [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f] ...
	I0717 17:15:30.615803   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:30.681514   22585 logs.go:123] Gathering logs for CRI-O ...
	I0717 17:15:30.681552   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 17:15:31.549350   22585 logs.go:123] Gathering logs for dmesg ...
	I0717 17:15:31.549393   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 17:15:31.563477   22585 logs.go:123] Gathering logs for kube-apiserver [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0] ...
	I0717 17:15:31.563504   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:31.613144   22585 logs.go:123] Gathering logs for coredns [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3] ...
	I0717 17:15:31.613170   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:31.647893   22585 logs.go:123] Gathering logs for kube-scheduler [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49] ...
	I0717 17:15:31.647921   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:31.686036   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:31.686062   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 17:15:31.686113   22585 out.go:239] X Problems detected in kubelet:
	W0717 17:15:31.686122   22585 out.go:239]   Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:31.686132   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:31.686143   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:31.686149   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:31.686158   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:31.686164   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:31.686172   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:15:41.699123   22585 system_pods.go:59] 18 kube-system pods found
	I0717 17:15:41.699153   22585 system_pods.go:61] "coredns-7db6d8ff4d-ktksd" [68b98670-2ada-403b-9f7f-a712b7a3ace4] Running
	I0717 17:15:41.699157   22585 system_pods.go:61] "csi-hostpath-attacher-0" [72a7a273-f40b-4503-a6f4-00ff9385aeda] Running
	I0717 17:15:41.699161   22585 system_pods.go:61] "csi-hostpath-resizer-0" [e50d25c5-3dad-4b92-ba5b-1e5458ec91a1] Running
	I0717 17:15:41.699165   22585 system_pods.go:61] "csi-hostpathplugin-nnchn" [4379d8e7-b277-4b17-968f-98ee1a746757] Running
	I0717 17:15:41.699167   22585 system_pods.go:61] "etcd-addons-435911" [b91aac8f-3bf7-4acd-aa81-40cee5dcb0f4] Running
	I0717 17:15:41.699170   22585 system_pods.go:61] "kube-apiserver-addons-435911" [31459445-84ba-4687-b7d1-996c53960592] Running
	I0717 17:15:41.699173   22585 system_pods.go:61] "kube-controller-manager-addons-435911" [36229cb2-73ea-4d6d-8d4f-d43b8b91fcd2] Running
	I0717 17:15:41.699178   22585 system_pods.go:61] "kube-ingress-dns-minikube" [5ba15390-d48e-46dd-a033-94fc60c42981] Running
	I0717 17:15:41.699181   22585 system_pods.go:61] "kube-proxy-s2kxf" [3739bf30-2198-42bf-a1c6-c53e9bbfe970] Running
	I0717 17:15:41.699184   22585 system_pods.go:61] "kube-scheduler-addons-435911" [35d4b1a8-5360-448f-887f-073e3ae0301d] Running
	I0717 17:15:41.699187   22585 system_pods.go:61] "metrics-server-c59844bb4-qfn6h" [594c6a3c-368e-421e-9d3f-ceb3426c0cf7] Running
	I0717 17:15:41.699190   22585 system_pods.go:61] "nvidia-device-plugin-daemonset-xst8q" [a0449eb2-9a20-4b3a-b414-1a8ca2c38090] Running
	I0717 17:15:41.699192   22585 system_pods.go:61] "registry-656c9c8d9c-k8vqb" [b2c62d08-0816-405d-b5e4-78e70611f29b] Running
	I0717 17:15:41.699197   22585 system_pods.go:61] "registry-proxy-qxnzl" [a6c49b2c-06f8-4825-b8b7-d2233c0cb798] Running
	I0717 17:15:41.699201   22585 system_pods.go:61] "snapshot-controller-745499f584-j5jh5" [55e87176-4e97-4953-b593-ecae177e3403] Running
	I0717 17:15:41.699205   22585 system_pods.go:61] "snapshot-controller-745499f584-ppvbb" [68b3d0a0-cba2-4f65-9487-adf50c36096f] Running
	I0717 17:15:41.699208   22585 system_pods.go:61] "storage-provisioner" [055c9722-8252-48a5-9048-7fcbc3cf7a2b] Running
	I0717 17:15:41.699211   22585 system_pods.go:61] "tiller-deploy-6677d64bcd-4vwq8" [bb7ff47b-ce42-448a-bc9b-96324fdaac73] Running
	I0717 17:15:41.699216   22585 system_pods.go:74] duration metric: took 11.725028942s to wait for pod list to return data ...
	I0717 17:15:41.699226   22585 default_sa.go:34] waiting for default service account to be created ...
	I0717 17:15:41.701409   22585 default_sa.go:45] found service account: "default"
	I0717 17:15:41.701427   22585 default_sa.go:55] duration metric: took 2.195384ms for default service account to be created ...
	I0717 17:15:41.701434   22585 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 17:15:41.711246   22585 system_pods.go:86] 18 kube-system pods found
	I0717 17:15:41.711276   22585 system_pods.go:89] "coredns-7db6d8ff4d-ktksd" [68b98670-2ada-403b-9f7f-a712b7a3ace4] Running
	I0717 17:15:41.711281   22585 system_pods.go:89] "csi-hostpath-attacher-0" [72a7a273-f40b-4503-a6f4-00ff9385aeda] Running
	I0717 17:15:41.711286   22585 system_pods.go:89] "csi-hostpath-resizer-0" [e50d25c5-3dad-4b92-ba5b-1e5458ec91a1] Running
	I0717 17:15:41.711290   22585 system_pods.go:89] "csi-hostpathplugin-nnchn" [4379d8e7-b277-4b17-968f-98ee1a746757] Running
	I0717 17:15:41.711294   22585 system_pods.go:89] "etcd-addons-435911" [b91aac8f-3bf7-4acd-aa81-40cee5dcb0f4] Running
	I0717 17:15:41.711298   22585 system_pods.go:89] "kube-apiserver-addons-435911" [31459445-84ba-4687-b7d1-996c53960592] Running
	I0717 17:15:41.711304   22585 system_pods.go:89] "kube-controller-manager-addons-435911" [36229cb2-73ea-4d6d-8d4f-d43b8b91fcd2] Running
	I0717 17:15:41.711309   22585 system_pods.go:89] "kube-ingress-dns-minikube" [5ba15390-d48e-46dd-a033-94fc60c42981] Running
	I0717 17:15:41.711313   22585 system_pods.go:89] "kube-proxy-s2kxf" [3739bf30-2198-42bf-a1c6-c53e9bbfe970] Running
	I0717 17:15:41.711317   22585 system_pods.go:89] "kube-scheduler-addons-435911" [35d4b1a8-5360-448f-887f-073e3ae0301d] Running
	I0717 17:15:41.711321   22585 system_pods.go:89] "metrics-server-c59844bb4-qfn6h" [594c6a3c-368e-421e-9d3f-ceb3426c0cf7] Running
	I0717 17:15:41.711326   22585 system_pods.go:89] "nvidia-device-plugin-daemonset-xst8q" [a0449eb2-9a20-4b3a-b414-1a8ca2c38090] Running
	I0717 17:15:41.711330   22585 system_pods.go:89] "registry-656c9c8d9c-k8vqb" [b2c62d08-0816-405d-b5e4-78e70611f29b] Running
	I0717 17:15:41.711336   22585 system_pods.go:89] "registry-proxy-qxnzl" [a6c49b2c-06f8-4825-b8b7-d2233c0cb798] Running
	I0717 17:15:41.711339   22585 system_pods.go:89] "snapshot-controller-745499f584-j5jh5" [55e87176-4e97-4953-b593-ecae177e3403] Running
	I0717 17:15:41.711345   22585 system_pods.go:89] "snapshot-controller-745499f584-ppvbb" [68b3d0a0-cba2-4f65-9487-adf50c36096f] Running
	I0717 17:15:41.711349   22585 system_pods.go:89] "storage-provisioner" [055c9722-8252-48a5-9048-7fcbc3cf7a2b] Running
	I0717 17:15:41.711355   22585 system_pods.go:89] "tiller-deploy-6677d64bcd-4vwq8" [bb7ff47b-ce42-448a-bc9b-96324fdaac73] Running
	I0717 17:15:41.711362   22585 system_pods.go:126] duration metric: took 9.922561ms to wait for k8s-apps to be running ...
	I0717 17:15:41.711368   22585 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 17:15:41.711412   22585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:15:41.729954   22585 system_svc.go:56] duration metric: took 18.574398ms WaitForService to wait for kubelet
	I0717 17:15:41.729987   22585 kubeadm.go:582] duration metric: took 2m22.470473505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:15:41.730013   22585 node_conditions.go:102] verifying NodePressure condition ...
	I0717 17:15:41.732689   22585 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:15:41.732714   22585 node_conditions.go:123] node cpu capacity is 2
	I0717 17:15:41.732726   22585 node_conditions.go:105] duration metric: took 2.707848ms to run NodePressure ...
	I0717 17:15:41.732736   22585 start.go:241] waiting for startup goroutines ...
	I0717 17:15:41.732744   22585 start.go:246] waiting for cluster config update ...
	I0717 17:15:41.732757   22585 start.go:255] writing updated cluster config ...
	I0717 17:15:41.733021   22585 ssh_runner.go:195] Run: rm -f paused
	I0717 17:15:41.779839   22585 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 17:15:41.782451   22585 out.go:177] * Done! kubectl is now configured to use "addons-435911" cluster and "default" namespace by default
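For reference, the "waiting for apiserver healthz status" step in the log above repeatedly probes the reported endpoint (https://192.168.39.27:8443/healthz) and gathers component logs between attempts until the endpoint returns 200 ("ok"). Below is a minimal Go sketch of that polling pattern. It is illustrative only: the function name waitForHealthz, the fixed retry interval (roughly the 10-second gap visible between checks in this run), and the 4-minute timeout are assumptions, not minikube's actual implementation; TLS verification is skipped because the apiserver serves a self-signed certificate.

```go
// Illustrative sketch only (not minikube's code): poll an apiserver /healthz
// endpoint until it returns HTTP 200 or a timeout expires, mirroring the
// "waiting for apiserver healthz status" step shown in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz is a hypothetical helper name used for this sketch.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The endpoint in this run uses a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// healthz returned 200: the apiserver is considered healthy.
				return nil
			}
		}
		// Roughly matches the ~10s spacing between checks seen in the log.
		time.Sleep(10 * time.Second)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	// Endpoint taken from this run's log output.
	if err := waitForHealthz("https://192.168.39.27:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ok")
}
```

The real wait loop additionally shells out to crictl and journalctl between attempts (the "Gathering logs for ..." lines above) so that a failed wait still leaves diagnostic output in the report; the sketch omits that.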
	
	
	==> CRI-O <==
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.825122968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721236728825089482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e66e525c-bcf9-489a-bd1a-3dfcbcca54e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.825829599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b235f700-65f6-4ca1-b5ec-9243091dd562 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.825884656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b235f700-65f6-4ca1-b5ec-9243091dd562 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.826505898Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b0a23ffb0e78b96159f53785471db113a79268302f69933e426f918beb14167,PodSandboxId:f3df555924b34d68e8ec7f6d1678e96c200a5066cdd0717517bd08ff82f13861,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721236721496045662,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-sn68h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bd855e9-5ad2-4b53-a4b8-81a2548d80be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b206c22,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81ad57bca11afa9ff4ef8c1f48f60f8aa0a5a938b76a0107c155ef833003f82,PodSandboxId:d3a7397c62339211450604403f550b91ccc713a8b3f06df26a76033e7365def5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721236581473502391,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c68e6dcb-da12-4d99-a5b7-eb687873f149,},Annotations:map[string]string{io.kubernet
es.container.hash: df200387,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2d1134191e675f19f3922068968108787bd78c032c106a32fc420cb773502,PodSandboxId:ef6dac799f02266d26924865012567cf27959da4f507249d51ca4396c25bcfb6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721236558550805305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-znd2v,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 46cfb6c7-3a68-411b-968e-8ab21c2226ff,},Annotations:map[string]string{io.kubernetes.container.hash: d0a9f3af,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa919d6ecaffe5a059fd1f624e32a8769ad52beed2e788f61a7207d198bfdbf3,PodSandboxId:5231a839fcb4f18f8df454be7f85541f20b3020e0e5d798c1bdb219b73d7f72c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721236484815341994,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fn48r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2a4fbcb0-0e68-4190-b1fa-e95a9ae93945,},Annotations:map[string]string{io.kubernetes.container.hash: 6deb9d4b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b8492e809b2439fd7a5347d6a340978a4c1da6c72a97bdb76641bd2b13b3ed,PodSandboxId:c2b242a849c64f3a639785f0405c277d881ce43f7268800b936e29526a22098e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721236467660088359,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gcrz5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 93896a00-33ce-4684-8a3e-e27f3b4f025a,},Annotations:map[string]string{io.kubernetes.container.hash: 24f4b07a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb21978ad8eede94160b2e7ea3617aa15fea3499577c353e5b80a2c3bab42f9,PodSandboxId:ccf28d7583f67e24a5337544f93b3cef762cefbbab121b6299afe35846bef1d9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721236465662355759,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4cxz6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 956fbce6-bf17-4d3c-af0d-c5e16a8b9064,},Annotations:map[string]string{io.kubernetes.container.hash: 3d855f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343cf42df006c62fc492f1c30b65e3803b40602bd440e4d79e1758f66954a677,PodSandboxId:fd99af3c2f91f2c6ff39c1f834be84984049cdbe34e2f8ab393543c00b958c1c,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf3
1e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721236460169056438,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-gj64l,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d75d651e-dc3f-4ea9-b380-f7637ab4ce97,},Annotations:map[string]string{io.kubernetes.container.hash: 5747c94d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881db15d7669e577c561397f470be0e4d6cff2c4e7dfae4a371fd85ddd50cada,PodSandboxId:c854a60739bf5901594c3264b49a036bc306e4d4aac406f42327194eec892deb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96d
e79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721236448232173244,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-blrqx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a9ebaa8e-4472-4135-822c-5fd806eb7fb6,},Annotations:map[string]string{io.kubernetes.container.hash: e39f8ab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc62acb56fc72aae8ad55516ee25f47058ffdbabc3179ce3b5922975c55be40e,PodSandboxId:c3671dfdd359cd62f93771eb79e9dc4cbf1ef3fc0f0172b5004f65065d2f9330,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721236439589292704,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qfn6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594c6a3c-368e-421e-9d3f-ceb3426c0cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 94f689a6,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a721d6e9c61620875bf344ec13670996a8189bfa2f61fbb74a2396a22c8419f,PodSandboxId:8df7bee35d3e05d9bcbd945f6c85c727381152
8beef3f76175a6057d51b5161e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721236404696134980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055c9722-8252-48a5-9048-7fcbc3cf7a2b,},Annotations:map[string]string{io.kubernetes.container.hash: 42746b27,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3,PodSandboxId:92d703072e50ff312ea100bd9386e950decf2d6f218d271550
40ffeb86309ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721236402159172231,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ktksd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b98670-2ada-403b-9f7f-a712b7a3ace4,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9fd0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e,PodSandboxId:6c5b966fad82bfc4f39fd7358f96dd446e9416a76e70d16a4da10f3a887a8715,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721236399847230286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2kxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3739bf30-2198-42bf-a1c6-c53e9bbfe970,},Annotations:map[string]string{io.kubernetes.container.hash: 7216d3fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49,PodSandboxId:e075b49efeab91f29141421bac3be5c5e8305e7d89716ebf3d53cd454bd4efee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721236380462267709,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074093c21d39c7941f7e4c1e5b68a75b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301,PodSandboxId:6d70065e627bc328c607bf5304d02f1c86f5163ef67b267615e96123eb22ec70,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721236380403800762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94f24a073ac9cce58506fe4709d9ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 21f309f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5a18c9713d21755550de03fc5f414
4e1fbe17961c2b4edbeef1640383974d0,PodSandboxId:f26b3799bdb11db73e72f6f774ac299128453bef930874d75bb0a3d0a1236864,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721236380336231830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0390e02e778f8620cd2833d7adc79023,},Annotations:map[string]string{io.kubernetes.container.hash: b74cd706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9978a55587a895e12fb0d591b73c90758af5fdac4042f39a
1d1c5dac70ecf06f,PodSandboxId:98a01a0664d4dff8283fd820de1ab183be1f30162756b655c3d7e5b383f2ac96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721236380315489334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef80a4a983e4af3963c62d6367bb65c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b23
5f700-65f6-4ca1-b5ec-9243091dd562 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.865530102Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7bd8a1d9-663f-4948-b478-a328db2b48ea name=/runtime.v1.RuntimeService/Version
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.865612020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7bd8a1d9-663f-4948-b478-a328db2b48ea name=/runtime.v1.RuntimeService/Version
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.867506785Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=949bb670-6b32-4011-90c0-1c9d4d626ea9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.868848934Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721236728868821333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=949bb670-6b32-4011-90c0-1c9d4d626ea9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.869392133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8239953e-2eaf-4889-bb3f-e5c179b08a65 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.869506391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8239953e-2eaf-4889-bb3f-e5c179b08a65 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.869816184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b0a23ffb0e78b96159f53785471db113a79268302f69933e426f918beb14167,PodSandboxId:f3df555924b34d68e8ec7f6d1678e96c200a5066cdd0717517bd08ff82f13861,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721236721496045662,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-sn68h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bd855e9-5ad2-4b53-a4b8-81a2548d80be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b206c22,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81ad57bca11afa9ff4ef8c1f48f60f8aa0a5a938b76a0107c155ef833003f82,PodSandboxId:d3a7397c62339211450604403f550b91ccc713a8b3f06df26a76033e7365def5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721236581473502391,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c68e6dcb-da12-4d99-a5b7-eb687873f149,},Annotations:map[string]string{io.kubernet
es.container.hash: df200387,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2d1134191e675f19f3922068968108787bd78c032c106a32fc420cb773502,PodSandboxId:ef6dac799f02266d26924865012567cf27959da4f507249d51ca4396c25bcfb6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721236558550805305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-znd2v,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 46cfb6c7-3a68-411b-968e-8ab21c2226ff,},Annotations:map[string]string{io.kubernetes.container.hash: d0a9f3af,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa919d6ecaffe5a059fd1f624e32a8769ad52beed2e788f61a7207d198bfdbf3,PodSandboxId:5231a839fcb4f18f8df454be7f85541f20b3020e0e5d798c1bdb219b73d7f72c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721236484815341994,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fn48r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2a4fbcb0-0e68-4190-b1fa-e95a9ae93945,},Annotations:map[string]string{io.kubernetes.container.hash: 6deb9d4b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b8492e809b2439fd7a5347d6a340978a4c1da6c72a97bdb76641bd2b13b3ed,PodSandboxId:c2b242a849c64f3a639785f0405c277d881ce43f7268800b936e29526a22098e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721236467660088359,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gcrz5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 93896a00-33ce-4684-8a3e-e27f3b4f025a,},Annotations:map[string]string{io.kubernetes.container.hash: 24f4b07a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb21978ad8eede94160b2e7ea3617aa15fea3499577c353e5b80a2c3bab42f9,PodSandboxId:ccf28d7583f67e24a5337544f93b3cef762cefbbab121b6299afe35846bef1d9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721236465662355759,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4cxz6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 956fbce6-bf17-4d3c-af0d-c5e16a8b9064,},Annotations:map[string]string{io.kubernetes.container.hash: 3d855f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343cf42df006c62fc492f1c30b65e3803b40602bd440e4d79e1758f66954a677,PodSandboxId:fd99af3c2f91f2c6ff39c1f834be84984049cdbe34e2f8ab393543c00b958c1c,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf3
1e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721236460169056438,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-gj64l,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d75d651e-dc3f-4ea9-b380-f7637ab4ce97,},Annotations:map[string]string{io.kubernetes.container.hash: 5747c94d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881db15d7669e577c561397f470be0e4d6cff2c4e7dfae4a371fd85ddd50cada,PodSandboxId:c854a60739bf5901594c3264b49a036bc306e4d4aac406f42327194eec892deb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96d
e79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721236448232173244,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-blrqx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a9ebaa8e-4472-4135-822c-5fd806eb7fb6,},Annotations:map[string]string{io.kubernetes.container.hash: e39f8ab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc62acb56fc72aae8ad55516ee25f47058ffdbabc3179ce3b5922975c55be40e,PodSandboxId:c3671dfdd359cd62f93771eb79e9dc4cbf1ef3fc0f0172b5004f65065d2f9330,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721236439589292704,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qfn6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594c6a3c-368e-421e-9d3f-ceb3426c0cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 94f689a6,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a721d6e9c61620875bf344ec13670996a8189bfa2f61fbb74a2396a22c8419f,PodSandboxId:8df7bee35d3e05d9bcbd945f6c85c727381152
8beef3f76175a6057d51b5161e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721236404696134980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055c9722-8252-48a5-9048-7fcbc3cf7a2b,},Annotations:map[string]string{io.kubernetes.container.hash: 42746b27,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3,PodSandboxId:92d703072e50ff312ea100bd9386e950decf2d6f218d271550
40ffeb86309ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721236402159172231,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ktksd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b98670-2ada-403b-9f7f-a712b7a3ace4,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9fd0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e,PodSandboxId:6c5b966fad82bfc4f39fd7358f96dd446e9416a76e70d16a4da10f3a887a8715,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721236399847230286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2kxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3739bf30-2198-42bf-a1c6-c53e9bbfe970,},Annotations:map[string]string{io.kubernetes.container.hash: 7216d3fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49,PodSandboxId:e075b49efeab91f29141421bac3be5c5e8305e7d89716ebf3d53cd454bd4efee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721236380462267709,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074093c21d39c7941f7e4c1e5b68a75b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301,PodSandboxId:6d70065e627bc328c607bf5304d02f1c86f5163ef67b267615e96123eb22ec70,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721236380403800762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94f24a073ac9cce58506fe4709d9ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 21f309f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5a18c9713d21755550de03fc5f414
4e1fbe17961c2b4edbeef1640383974d0,PodSandboxId:f26b3799bdb11db73e72f6f774ac299128453bef930874d75bb0a3d0a1236864,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721236380336231830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0390e02e778f8620cd2833d7adc79023,},Annotations:map[string]string{io.kubernetes.container.hash: b74cd706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9978a55587a895e12fb0d591b73c90758af5fdac4042f39a
1d1c5dac70ecf06f,PodSandboxId:98a01a0664d4dff8283fd820de1ab183be1f30162756b655c3d7e5b383f2ac96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721236380315489334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef80a4a983e4af3963c62d6367bb65c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=823
9953e-2eaf-4889-bb3f-e5c179b08a65 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.900621376Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c633a2db-0fc7-4a0a-805d-a852b440d64e name=/runtime.v1.RuntimeService/Version
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.900715806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c633a2db-0fc7-4a0a-805d-a852b440d64e name=/runtime.v1.RuntimeService/Version
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.901473586Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d56e4c05-983a-4505-85b2-dffb4dd789c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.902729116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721236728902705310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d56e4c05-983a-4505-85b2-dffb4dd789c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.903233156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b48b360-5bdb-4182-bec5-d668640555fc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.903289865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b48b360-5bdb-4182-bec5-d668640555fc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.903848702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b0a23ffb0e78b96159f53785471db113a79268302f69933e426f918beb14167,PodSandboxId:f3df555924b34d68e8ec7f6d1678e96c200a5066cdd0717517bd08ff82f13861,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721236721496045662,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-sn68h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bd855e9-5ad2-4b53-a4b8-81a2548d80be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b206c22,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81ad57bca11afa9ff4ef8c1f48f60f8aa0a5a938b76a0107c155ef833003f82,PodSandboxId:d3a7397c62339211450604403f550b91ccc713a8b3f06df26a76033e7365def5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721236581473502391,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c68e6dcb-da12-4d99-a5b7-eb687873f149,},Annotations:map[string]string{io.kubernet
es.container.hash: df200387,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2d1134191e675f19f3922068968108787bd78c032c106a32fc420cb773502,PodSandboxId:ef6dac799f02266d26924865012567cf27959da4f507249d51ca4396c25bcfb6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721236558550805305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-znd2v,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 46cfb6c7-3a68-411b-968e-8ab21c2226ff,},Annotations:map[string]string{io.kubernetes.container.hash: d0a9f3af,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa919d6ecaffe5a059fd1f624e32a8769ad52beed2e788f61a7207d198bfdbf3,PodSandboxId:5231a839fcb4f18f8df454be7f85541f20b3020e0e5d798c1bdb219b73d7f72c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721236484815341994,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fn48r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2a4fbcb0-0e68-4190-b1fa-e95a9ae93945,},Annotations:map[string]string{io.kubernetes.container.hash: 6deb9d4b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b8492e809b2439fd7a5347d6a340978a4c1da6c72a97bdb76641bd2b13b3ed,PodSandboxId:c2b242a849c64f3a639785f0405c277d881ce43f7268800b936e29526a22098e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721236467660088359,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gcrz5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 93896a00-33ce-4684-8a3e-e27f3b4f025a,},Annotations:map[string]string{io.kubernetes.container.hash: 24f4b07a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb21978ad8eede94160b2e7ea3617aa15fea3499577c353e5b80a2c3bab42f9,PodSandboxId:ccf28d7583f67e24a5337544f93b3cef762cefbbab121b6299afe35846bef1d9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721236465662355759,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4cxz6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 956fbce6-bf17-4d3c-af0d-c5e16a8b9064,},Annotations:map[string]string{io.kubernetes.container.hash: 3d855f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343cf42df006c62fc492f1c30b65e3803b40602bd440e4d79e1758f66954a677,PodSandboxId:fd99af3c2f91f2c6ff39c1f834be84984049cdbe34e2f8ab393543c00b958c1c,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf3
1e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721236460169056438,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-gj64l,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d75d651e-dc3f-4ea9-b380-f7637ab4ce97,},Annotations:map[string]string{io.kubernetes.container.hash: 5747c94d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881db15d7669e577c561397f470be0e4d6cff2c4e7dfae4a371fd85ddd50cada,PodSandboxId:c854a60739bf5901594c3264b49a036bc306e4d4aac406f42327194eec892deb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96d
e79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721236448232173244,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-blrqx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a9ebaa8e-4472-4135-822c-5fd806eb7fb6,},Annotations:map[string]string{io.kubernetes.container.hash: e39f8ab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc62acb56fc72aae8ad55516ee25f47058ffdbabc3179ce3b5922975c55be40e,PodSandboxId:c3671dfdd359cd62f93771eb79e9dc4cbf1ef3fc0f0172b5004f65065d2f9330,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721236439589292704,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qfn6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594c6a3c-368e-421e-9d3f-ceb3426c0cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 94f689a6,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a721d6e9c61620875bf344ec13670996a8189bfa2f61fbb74a2396a22c8419f,PodSandboxId:8df7bee35d3e05d9bcbd945f6c85c727381152
8beef3f76175a6057d51b5161e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721236404696134980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055c9722-8252-48a5-9048-7fcbc3cf7a2b,},Annotations:map[string]string{io.kubernetes.container.hash: 42746b27,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3,PodSandboxId:92d703072e50ff312ea100bd9386e950decf2d6f218d271550
40ffeb86309ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721236402159172231,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ktksd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b98670-2ada-403b-9f7f-a712b7a3ace4,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9fd0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e,PodSandboxId:6c5b966fad82bfc4f39fd7358f96dd446e9416a76e70d16a4da10f3a887a8715,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721236399847230286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2kxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3739bf30-2198-42bf-a1c6-c53e9bbfe970,},Annotations:map[string]string{io.kubernetes.container.hash: 7216d3fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49,PodSandboxId:e075b49efeab91f29141421bac3be5c5e8305e7d89716ebf3d53cd454bd4efee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721236380462267709,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074093c21d39c7941f7e4c1e5b68a75b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301,PodSandboxId:6d70065e627bc328c607bf5304d02f1c86f5163ef67b267615e96123eb22ec70,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721236380403800762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94f24a073ac9cce58506fe4709d9ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 21f309f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5a18c9713d21755550de03fc5f414
4e1fbe17961c2b4edbeef1640383974d0,PodSandboxId:f26b3799bdb11db73e72f6f774ac299128453bef930874d75bb0a3d0a1236864,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721236380336231830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0390e02e778f8620cd2833d7adc79023,},Annotations:map[string]string{io.kubernetes.container.hash: b74cd706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9978a55587a895e12fb0d591b73c90758af5fdac4042f39a
1d1c5dac70ecf06f,PodSandboxId:98a01a0664d4dff8283fd820de1ab183be1f30162756b655c3d7e5b383f2ac96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721236380315489334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef80a4a983e4af3963c62d6367bb65c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b4
8b360-5bdb-4182-bec5-d668640555fc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.943161898Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3a89896-46c2-4667-bf40-f0526d6227d1 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.943254514Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3a89896-46c2-4667-bf40-f0526d6227d1 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.944569353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cf37431-8ee2-40d5-be1b-e8e2d6462d10 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.945880792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721236728945854694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cf37431-8ee2-40d5-be1b-e8e2d6462d10 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.946515100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c03e72f9-c08b-4d05-8d13-868c449a8d90 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.946584525Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c03e72f9-c08b-4d05-8d13-868c449a8d90 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:18:48 addons-435911 crio[686]: time="2024-07-17 17:18:48.946933829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b0a23ffb0e78b96159f53785471db113a79268302f69933e426f918beb14167,PodSandboxId:f3df555924b34d68e8ec7f6d1678e96c200a5066cdd0717517bd08ff82f13861,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721236721496045662,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-sn68h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bd855e9-5ad2-4b53-a4b8-81a2548d80be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b206c22,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81ad57bca11afa9ff4ef8c1f48f60f8aa0a5a938b76a0107c155ef833003f82,PodSandboxId:d3a7397c62339211450604403f550b91ccc713a8b3f06df26a76033e7365def5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721236581473502391,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c68e6dcb-da12-4d99-a5b7-eb687873f149,},Annotations:map[string]string{io.kubernet
es.container.hash: df200387,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2d1134191e675f19f3922068968108787bd78c032c106a32fc420cb773502,PodSandboxId:ef6dac799f02266d26924865012567cf27959da4f507249d51ca4396c25bcfb6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721236558550805305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-znd2v,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 46cfb6c7-3a68-411b-968e-8ab21c2226ff,},Annotations:map[string]string{io.kubernetes.container.hash: d0a9f3af,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa919d6ecaffe5a059fd1f624e32a8769ad52beed2e788f61a7207d198bfdbf3,PodSandboxId:5231a839fcb4f18f8df454be7f85541f20b3020e0e5d798c1bdb219b73d7f72c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721236484815341994,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fn48r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2a4fbcb0-0e68-4190-b1fa-e95a9ae93945,},Annotations:map[string]string{io.kubernetes.container.hash: 6deb9d4b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48b8492e809b2439fd7a5347d6a340978a4c1da6c72a97bdb76641bd2b13b3ed,PodSandboxId:c2b242a849c64f3a639785f0405c277d881ce43f7268800b936e29526a22098e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAIN
ER_EXITED,CreatedAt:1721236467660088359,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gcrz5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 93896a00-33ce-4684-8a3e-e27f3b4f025a,},Annotations:map[string]string{io.kubernetes.container.hash: 24f4b07a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb21978ad8eede94160b2e7ea3617aa15fea3499577c353e5b80a2c3bab42f9,PodSandboxId:ccf28d7583f67e24a5337544f93b3cef762cefbbab121b6299afe35846bef1d9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f61
75e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1721236465662355759,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4cxz6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 956fbce6-bf17-4d3c-af0d-c5e16a8b9064,},Annotations:map[string]string{io.kubernetes.container.hash: 3d855f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343cf42df006c62fc492f1c30b65e3803b40602bd440e4d79e1758f66954a677,PodSandboxId:fd99af3c2f91f2c6ff39c1f834be84984049cdbe34e2f8ab393543c00b958c1c,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf3
1e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721236460169056438,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-gj64l,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d75d651e-dc3f-4ea9-b380-f7637ab4ce97,},Annotations:map[string]string{io.kubernetes.container.hash: 5747c94d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881db15d7669e577c561397f470be0e4d6cff2c4e7dfae4a371fd85ddd50cada,PodSandboxId:c854a60739bf5901594c3264b49a036bc306e4d4aac406f42327194eec892deb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96d
e79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721236448232173244,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-blrqx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a9ebaa8e-4472-4135-822c-5fd806eb7fb6,},Annotations:map[string]string{io.kubernetes.container.hash: e39f8ab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc62acb56fc72aae8ad55516ee25f47058ffdbabc3179ce3b5922975c55be40e,PodSandboxId:c3671dfdd359cd62f93771eb79e9dc4cbf1ef3fc0f0172b5004f65065d2f9330,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server
/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721236439589292704,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qfn6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594c6a3c-368e-421e-9d3f-ceb3426c0cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 94f689a6,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a721d6e9c61620875bf344ec13670996a8189bfa2f61fbb74a2396a22c8419f,PodSandboxId:8df7bee35d3e05d9bcbd945f6c85c727381152
8beef3f76175a6057d51b5161e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721236404696134980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055c9722-8252-48a5-9048-7fcbc3cf7a2b,},Annotations:map[string]string{io.kubernetes.container.hash: 42746b27,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3,PodSandboxId:92d703072e50ff312ea100bd9386e950decf2d6f218d271550
40ffeb86309ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721236402159172231,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ktksd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b98670-2ada-403b-9f7f-a712b7a3ace4,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9fd0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e,PodSandboxId:6c5b966fad82bfc4f39fd7358f96dd446e9416a76e70d16a4da10f3a887a8715,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721236399847230286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2kxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3739bf30-2198-42bf-a1c6-c53e9bbfe970,},Annotations:map[string]string{io.kubernetes.container.hash: 7216d3fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49,PodSandboxId:e075b49efeab91f29141421bac3be5c5e8305e7d89716ebf3d53cd454bd4efee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721236380462267709,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074093c21d39c7941f7e4c1e5b68a75b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301,PodSandboxId:6d70065e627bc328c607bf5304d02f1c86f5163ef67b267615e96123eb22ec70,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721236380403800762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94f24a073ac9cce58506fe4709d9ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 21f309f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5a18c9713d21755550de03fc5f414
4e1fbe17961c2b4edbeef1640383974d0,PodSandboxId:f26b3799bdb11db73e72f6f774ac299128453bef930874d75bb0a3d0a1236864,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721236380336231830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0390e02e778f8620cd2833d7adc79023,},Annotations:map[string]string{io.kubernetes.container.hash: b74cd706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9978a55587a895e12fb0d591b73c90758af5fdac4042f39a
1d1c5dac70ecf06f,PodSandboxId:98a01a0664d4dff8283fd820de1ab183be1f30162756b655c3d7e5b383f2ac96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721236380315489334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef80a4a983e4af3963c62d6367bb65c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c03
e72f9-c08b-4d05-8d13-868c449a8d90 name=/runtime.v1.RuntimeService/ListContainers
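	The ListContainers responses above are successive polls of the same container set, one per kubelet/CRI status query. The same inventory can be checked directly on the node with crictl (a sketch, assuming the addons-435911 guest is still running and reachable over minikube ssh; these commands were not part of the test run):
	
	    out/minikube-linux-amd64 -p addons-435911 ssh "sudo crictl ps -a"
	    out/minikube-linux-amd64 -p addons-435911 ssh "sudo crictl ps -a -o json"
	
	The JSON form carries roughly the same Id, PodSandboxId, Image and State fields that appear in the crio debug log above.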
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0b0a23ffb0e78       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   f3df555924b34       hello-world-app-6778b5fc9f-sn68h
	a81ad57bca11a       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                              2 minutes ago       Running             nginx                     0                   d3a7397c62339       nginx
	8bd2d1134191e       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        2 minutes ago       Running             headlamp                  0                   ef6dac799f022       headlamp-7867546754-znd2v
	aa919d6ecaffe       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 4 minutes ago       Running             gcp-auth                  0                   5231a839fcb4f       gcp-auth-5db96cd9b4-fn48r
	48b8492e809b2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              patch                     0                   c2b242a849c64       ingress-nginx-admission-patch-gcrz5
	1eb21978ad8ee       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   ccf28d7583f67       ingress-nginx-admission-create-4cxz6
	343cf42df006c       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                              4 minutes ago       Running             yakd                      0                   fd99af3c2f91f       yakd-dashboard-799879c74f-gj64l
	881db15d7669e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   c854a60739bf5       local-path-provisioner-8d985888d-blrqx
	bc62acb56fc72       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   c3671dfdd359c       metrics-server-c59844bb4-qfn6h
	7a721d6e9c616       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   8df7bee35d3e0       storage-provisioner
	65933a91dc9ef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   92d703072e50f       coredns-7db6d8ff4d-ktksd
	e792b08ebd527       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                             5 minutes ago       Running             kube-proxy                0                   6c5b966fad82b       kube-proxy-s2kxf
	e0b8a95edb5a4       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                             5 minutes ago       Running             kube-scheduler            0                   e075b49efeab9       kube-scheduler-addons-435911
	8313f11cb4d95       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             5 minutes ago       Running             etcd                      0                   6d70065e627bc       etcd-addons-435911
	fe5a18c9713d2       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                             5 minutes ago       Running             kube-apiserver            0                   f26b3799bdb11       kube-apiserver-addons-435911
	9978a55587a89       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                             5 minutes ago       Running             kube-controller-manager   0                   98a01a0664d4d       kube-controller-manager-addons-435911
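	Any container in this table can be examined further with crictl on the node, for example to pull logs for the hello-world-app and nginx containers exercised by the Ingress test (a sketch using IDs copied from the table above; assumes the cluster is still reachable):
	
	    out/minikube-linux-amd64 -p addons-435911 ssh "sudo crictl logs 0b0a23ffb0e78"
	    out/minikube-linux-amd64 -p addons-435911 ssh "sudo crictl inspect a81ad57bca11a"
	
	crictl generally accepts unambiguous ID prefixes, so the truncated IDs shown in the CONTAINER column are usually enough.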
	
	
	==> coredns [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3] <==
	[INFO] 10.244.0.7:39345 - 38367 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000135831s
	[INFO] 10.244.0.7:45263 - 65221 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000236831s
	[INFO] 10.244.0.7:45263 - 62522 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095704s
	[INFO] 10.244.0.7:52371 - 15909 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000160921s
	[INFO] 10.244.0.7:52371 - 14631 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000176843s
	[INFO] 10.244.0.7:39423 - 48435 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000189602s
	[INFO] 10.244.0.7:39423 - 32050 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000102393s
	[INFO] 10.244.0.7:47882 - 5362 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000098795s
	[INFO] 10.244.0.7:47882 - 20977 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000092583s
	[INFO] 10.244.0.7:60178 - 20395 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082831s
	[INFO] 10.244.0.7:60178 - 30121 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073424s
	[INFO] 10.244.0.7:52057 - 51165 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000155933s
	[INFO] 10.244.0.7:52057 - 58591 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000235363s
	[INFO] 10.244.0.7:48080 - 56348 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000073001s
	[INFO] 10.244.0.7:48080 - 58626 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000045221s
	[INFO] 10.244.0.22:36571 - 11224 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000538242s
	[INFO] 10.244.0.22:59498 - 37390 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000627304s
	[INFO] 10.244.0.22:51128 - 8813 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000080998s
	[INFO] 10.244.0.22:38099 - 38543 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009486s
	[INFO] 10.244.0.22:52087 - 11175 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007915s
	[INFO] 10.244.0.22:60207 - 13637 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105714s
	[INFO] 10.244.0.22:32927 - 62204 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000478535s
	[INFO] 10.244.0.22:48017 - 11200 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000449084s
	[INFO] 10.244.0.26:43330 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000427723s
	[INFO] 10.244.0.26:35060 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000133427s
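	(The NXDOMAIN/NOERROR pairs above are the normal effect of the pod resolv.conf search path: a short name is expanded through kube-system.svc.cluster.local, svc.cluster.local and cluster.local before the fully qualified name answers. A rough way to reproduce such a lookup, with a throwaway pod whose name and image are illustrative only:
	  kubectl --context addons-435911 run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local )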
	
	
	==> describe nodes <==
	Name:               addons-435911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-435911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=addons-435911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T17_13_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-435911
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:13:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-435911
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:18:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:16:41 +0000   Wed, 17 Jul 2024 17:13:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:16:41 +0000   Wed, 17 Jul 2024 17:13:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:16:41 +0000   Wed, 17 Jul 2024 17:13:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:16:41 +0000   Wed, 17 Jul 2024 17:13:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    addons-435911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d28c18b66294996a96261bd0a3a851e
	  System UUID:                4d28c18b-6629-4996-a962-61bd0a3a851e
	  Boot ID:                    3c05feed-3801-4256-af02-cf50ab398763
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-sn68h          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  gcp-auth                    gcp-auth-5db96cd9b4-fn48r                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  headlamp                    headlamp-7867546754-znd2v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 coredns-7db6d8ff4d-ktksd                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m30s
	  kube-system                 etcd-addons-435911                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m45s
	  kube-system                 kube-apiserver-addons-435911              250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-controller-manager-addons-435911     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-proxy-s2kxf                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-addons-435911              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 metrics-server-c59844bb4-qfn6h            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m24s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  local-path-storage          local-path-provisioner-8d985888d-blrqx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  yakd-dashboard              yakd-dashboard-799879c74f-gj64l           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m28s                  kube-proxy       
	  Normal  Starting                 5m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m50s (x8 over 5m50s)  kubelet          Node addons-435911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m50s (x8 over 5m50s)  kubelet          Node addons-435911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m50s (x7 over 5m50s)  kubelet          Node addons-435911 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m44s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m44s                  kubelet          Node addons-435911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m44s                  kubelet          Node addons-435911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m44s                  kubelet          Node addons-435911 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m43s                  kubelet          Node addons-435911 status is now: NodeReady
	  Normal  RegisteredNode           5m31s                  node-controller  Node addons-435911 event: Registered Node addons-435911 in Controller
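	(The block above is standard `kubectl describe node` output; it can be regenerated against this profile with:
	  kubectl --context addons-435911 describe node addons-435911 )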
	
	
	==> dmesg <==
	[  +5.084869] kauditd_printk_skb: 121 callbacks suppressed
	[  +5.011981] kauditd_printk_skb: 131 callbacks suppressed
	[  +5.063076] kauditd_printk_skb: 70 callbacks suppressed
	[ +22.001873] kauditd_printk_skb: 4 callbacks suppressed
	[Jul17 17:14] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.021474] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.639850] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.499565] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.863031] kauditd_printk_skb: 86 callbacks suppressed
	[  +6.315952] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.970508] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.414718] kauditd_printk_skb: 36 callbacks suppressed
	[Jul17 17:15] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.677046] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.986469] kauditd_printk_skb: 22 callbacks suppressed
	[Jul17 17:16] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.565904] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.382067] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.213190] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.048417] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.406326] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.191149] kauditd_printk_skb: 19 callbacks suppressed
	[  +8.393607] kauditd_printk_skb: 33 callbacks suppressed
	[Jul17 17:18] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.296977] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301] <==
	{"level":"info","ts":"2024-07-17T17:14:27.404515Z","caller":"traceutil/trace.go:171","msg":"trace[1863744211] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1064; }","duration":"276.218192ms","start":"2024-07-17T17:14:27.12829Z","end":"2024-07-17T17:14:27.404508Z","steps":["trace[1863744211] 'agreement among raft nodes before linearized reading'  (duration: 276.031687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:14:27.404529Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.158147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14299"}
	{"level":"info","ts":"2024-07-17T17:14:27.404552Z","caller":"traceutil/trace.go:171","msg":"trace[1162972558] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1064; }","duration":"127.208048ms","start":"2024-07-17T17:14:27.277338Z","end":"2024-07-17T17:14:27.404546Z","steps":["trace[1162972558] 'agreement among raft nodes before linearized reading'  (duration: 127.129644ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T17:15:50.08007Z","caller":"traceutil/trace.go:171","msg":"trace[557754540] linearizableReadLoop","detail":"{readStateIndex:1439; appliedIndex:1439; }","duration":"274.971404ms","start":"2024-07-17T17:15:49.805089Z","end":"2024-07-17T17:15:50.08006Z","steps":["trace[557754540] 'read index received'  (duration: 274.96477ms)","trace[557754540] 'applied index is now lower than readState.Index'  (duration: 5.665µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T17:15:50.080026Z","caller":"traceutil/trace.go:171","msg":"trace[1796510116] transaction","detail":"{read_only:false; response_revision:1390; number_of_response:1; }","duration":"335.461276ms","start":"2024-07-17T17:15:49.744523Z","end":"2024-07-17T17:15:50.079984Z","steps":["trace[1796510116] 'process raft request'  (duration: 335.317541ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:15:50.080434Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:15:49.744504Z","time spent":"335.759054ms","remote":"127.0.0.1:49660","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2010,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/default/task-pv-pod\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/task-pv-pod\" value_size:1968 >> failure:<>"}
	{"level":"warn","ts":"2024-07-17T17:15:50.080576Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.46969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:84094"}
	{"level":"info","ts":"2024-07-17T17:15:50.080614Z","caller":"traceutil/trace.go:171","msg":"trace[631614517] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1390; }","duration":"275.546656ms","start":"2024-07-17T17:15:49.805056Z","end":"2024-07-17T17:15:50.080603Z","steps":["trace[631614517] 'agreement among raft nodes before linearized reading'  (duration: 275.191071ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:15:50.083584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.120987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-17T17:15:50.083635Z","caller":"traceutil/trace.go:171","msg":"trace[895294486] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1390; }","duration":"189.24201ms","start":"2024-07-17T17:15:49.894384Z","end":"2024-07-17T17:15:50.083626Z","steps":["trace[895294486] 'agreement among raft nodes before linearized reading'  (duration: 189.119811ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T17:16:04.887658Z","caller":"traceutil/trace.go:171","msg":"trace[795762626] transaction","detail":"{read_only:false; response_revision:1447; number_of_response:1; }","duration":"393.674877ms","start":"2024-07-17T17:16:04.49363Z","end":"2024-07-17T17:16:04.887305Z","steps":["trace[795762626] 'process raft request'  (duration: 393.345694ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:16:04.887877Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:16:04.493615Z","time spent":"394.159797ms","remote":"127.0.0.1:49660","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4006,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/tiller-deploy-6677d64bcd-4vwq8\" mod_revision:1439 > success:<request_put:<key:\"/registry/pods/kube-system/tiller-deploy-6677d64bcd-4vwq8\" value_size:3941 >> failure:<request_range:<key:\"/registry/pods/kube-system/tiller-deploy-6677d64bcd-4vwq8\" > >"}
	{"level":"info","ts":"2024-07-17T17:16:04.888943Z","caller":"traceutil/trace.go:171","msg":"trace[1999490122] linearizableReadLoop","detail":"{readStateIndex:1500; appliedIndex:1499; }","duration":"273.530991ms","start":"2024-07-17T17:16:04.615383Z","end":"2024-07-17T17:16:04.888914Z","steps":["trace[1999490122] 'read index received'  (duration: 271.536182ms)","trace[1999490122] 'applied index is now lower than readState.Index'  (duration: 1.993603ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T17:16:04.889154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.760712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.27\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-17T17:16:04.891333Z","caller":"traceutil/trace.go:171","msg":"trace[443358345] range","detail":"{range_begin:/registry/masterleases/192.168.39.27; range_end:; response_count:1; response_revision:1447; }","duration":"273.815828ms","start":"2024-07-17T17:16:04.61536Z","end":"2024-07-17T17:16:04.889176Z","steps":["trace[443358345] 'agreement among raft nodes before linearized reading'  (duration: 273.720959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:16:04.897676Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.653076ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:4020"}
	{"level":"info","ts":"2024-07-17T17:16:04.897713Z","caller":"traceutil/trace.go:171","msg":"trace[1028145181] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1447; }","duration":"258.713079ms","start":"2024-07-17T17:16:04.638991Z","end":"2024-07-17T17:16:04.897704Z","steps":["trace[1028145181] 'agreement among raft nodes before linearized reading'  (duration: 258.611801ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:16:04.897834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.188664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-17T17:16:04.897879Z","caller":"traceutil/trace.go:171","msg":"trace[2003933802] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:1447; }","duration":"176.256334ms","start":"2024-07-17T17:16:04.72161Z","end":"2024-07-17T17:16:04.897866Z","steps":["trace[2003933802] 'agreement among raft nodes before linearized reading'  (duration: 176.195225ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T17:16:08.486464Z","caller":"traceutil/trace.go:171","msg":"trace[117054538] linearizableReadLoop","detail":"{readStateIndex:1509; appliedIndex:1508; }","duration":"300.202882ms","start":"2024-07-17T17:16:08.186247Z","end":"2024-07-17T17:16:08.48645Z","steps":["trace[117054538] 'read index received'  (duration: 300.017849ms)","trace[117054538] 'applied index is now lower than readState.Index'  (duration: 184.28µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T17:16:08.486613Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.349937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-17T17:16:08.486649Z","caller":"traceutil/trace.go:171","msg":"trace[345364053] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1455; }","duration":"300.383676ms","start":"2024-07-17T17:16:08.186243Z","end":"2024-07-17T17:16:08.486627Z","steps":["trace[345364053] 'agreement among raft nodes before linearized reading'  (duration: 300.291247ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:16:08.486672Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:16:08.186211Z","time spent":"300.455473ms","remote":"127.0.0.1:49654","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1135,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-07-17T17:16:08.486677Z","caller":"traceutil/trace.go:171","msg":"trace[1885716528] transaction","detail":"{read_only:false; response_revision:1455; number_of_response:1; }","duration":"304.125787ms","start":"2024-07-17T17:16:08.182539Z","end":"2024-07-17T17:16:08.486665Z","steps":["trace[1885716528] 'process raft request'  (duration: 303.757767ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:16:08.486756Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:16:08.182524Z","time spent":"304.188888ms","remote":"127.0.0.1:49748","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-tpbmdt7r7mmwyvrtzhzzmqx3iq\" mod_revision:1410 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-tpbmdt7r7mmwyvrtzhzzmqx3iq\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-tpbmdt7r7mmwyvrtzhzzmqx3iq\" > >"}
	
	
	==> gcp-auth [aa919d6ecaffe5a059fd1f624e32a8769ad52beed2e788f61a7207d198bfdbf3] <==
	2024/07/17 17:14:44 GCP Auth Webhook started!
	2024/07/17 17:15:42 Ready to marshal response ...
	2024/07/17 17:15:42 Ready to write response ...
	2024/07/17 17:15:42 Ready to marshal response ...
	2024/07/17 17:15:42 Ready to write response ...
	2024/07/17 17:15:42 Ready to marshal response ...
	2024/07/17 17:15:42 Ready to write response ...
	2024/07/17 17:15:47 Ready to marshal response ...
	2024/07/17 17:15:47 Ready to write response ...
	2024/07/17 17:15:49 Ready to marshal response ...
	2024/07/17 17:15:49 Ready to write response ...
	2024/07/17 17:15:53 Ready to marshal response ...
	2024/07/17 17:15:53 Ready to write response ...
	2024/07/17 17:16:14 Ready to marshal response ...
	2024/07/17 17:16:14 Ready to write response ...
	2024/07/17 17:16:14 Ready to marshal response ...
	2024/07/17 17:16:14 Ready to write response ...
	2024/07/17 17:16:15 Ready to marshal response ...
	2024/07/17 17:16:15 Ready to write response ...
	2024/07/17 17:16:26 Ready to marshal response ...
	2024/07/17 17:16:26 Ready to write response ...
	2024/07/17 17:16:32 Ready to marshal response ...
	2024/07/17 17:16:32 Ready to write response ...
	2024/07/17 17:18:38 Ready to marshal response ...
	2024/07/17 17:18:38 Ready to write response ...
	
	
	==> kernel <==
	 17:18:49 up 6 min,  0 users,  load average: 0.40, 1.06, 0.61
	Linux addons-435911 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0717 17:15:05.542203       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.195.116:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.195.116:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.195.116:443: connect: connection refused
	E0717 17:15:05.582718       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0717 17:15:05.591327       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 17:15:42.568636       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.127.18"}
	I0717 17:16:09.432205       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 17:16:10.521278       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 17:16:15.223780       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 17:16:15.419671       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.253.146"}
	I0717 17:16:16.151558       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0717 17:16:42.845311       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0717 17:16:49.296190       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 17:16:49.296241       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 17:16:49.323339       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 17:16:49.323391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 17:16:49.332221       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 17:16:49.332274       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 17:16:49.357731       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 17:16:49.357791       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 17:16:49.405202       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 17:16:49.405312       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 17:16:50.332975       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 17:16:50.406100       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 17:16:50.440035       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 17:18:38.756039       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.48.36"}
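	(The earlier "v1beta1.metrics.k8s.io failed with ... connection refused" lines show the aggregated metrics API being registered before metrics-server was reachable. Whether it ever became Available can be checked directly; both commands are plain kubectl, not part of the test:
	  kubectl --context addons-435911 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context addons-435911 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].status}' )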
	
	
	==> kube-controller-manager [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f] <==
	W0717 17:17:33.837234       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:17:33.837484       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:17:36.518256       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:17:36.518487       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:17:40.288625       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:17:40.288758       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:18:02.977048       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:18:02.977234       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:18:11.259619       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:18:11.259720       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:18:15.023722       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:18:15.023768       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:18:30.580520       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:18:30.580586       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 17:18:38.627812       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="47.418442ms"
	I0717 17:18:38.635972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="8.104138ms"
	I0717 17:18:38.636330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="106.234µs"
	I0717 17:18:38.639222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="21.701µs"
	I0717 17:18:41.005156       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0717 17:18:41.007750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="4.452µs"
	I0717 17:18:41.014466       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0717 17:18:42.412140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.687133ms"
	I0717 17:18:42.412272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="42.22µs"
	W0717 17:18:45.504055       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:18:45.504140       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
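	(The PartialObjectMetadata list/watch failures most likely come from the metadata informers still tracking resources whose CRDs were removed mid-run; the apiserver log above shows the gadget and snapshot watchers being terminated. A quick check for leftover definitions, assuming nothing beyond stock kubectl:
	  kubectl --context addons-435911 get crd | grep -E 'gadget|snapshot' )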
	
	
	==> kube-proxy [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e] <==
	I0717 17:13:20.610171       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:13:20.623956       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.27"]
	I0717 17:13:20.684209       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:13:20.684261       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:13:20.684288       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:13:20.688312       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:13:20.688587       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:13:20.688608       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:13:20.690165       1 config.go:192] "Starting service config controller"
	I0717 17:13:20.690188       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:13:20.690217       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:13:20.690222       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:13:20.690756       1 config.go:319] "Starting node config controller"
	I0717 17:13:20.690762       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:13:20.790480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:13:20.790535       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:13:20.790793       1 shared_informer.go:320] Caches are synced for node config
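	(kube-proxy is running in iptables mode, so Service VIPs are implemented as NAT rules inside the guest. They can be inspected over the profile's ssh helper; KUBE-SERVICES is the standard kube-proxy chain name:
	  out/minikube-linux-amd64 -p addons-435911 ssh "sudo iptables -t nat -L KUBE-SERVICES | head" )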
	
	
	==> kube-scheduler [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49] <==
	E0717 17:13:03.003445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 17:13:03.003488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 17:13:03.003492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:13:03.003546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:13:03.003613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:13:03.003745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:13:03.003527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 17:13:03.003822       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 17:13:03.819127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:13:03.819158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:13:03.951492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 17:13:03.951546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 17:13:04.004089       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:13:04.004129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:13:04.149079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 17:13:04.149119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 17:13:04.174358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 17:13:04.174457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 17:13:04.186369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 17:13:04.186445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 17:13:04.254490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 17:13:04.254528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 17:13:04.339543       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:13:04.339653       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 17:13:06.799828       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
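	(The forbidden list/watch errors above are the usual transient noise during control-plane bootstrap, logged before the apiserver's RBAC bootstrap policy is fully in place; they stop once the informer caches sync on the last line. If they persisted, the scheduler's permissions could be probed with:
	  kubectl --context addons-435911 auth can-i list pods --as=system:kube-scheduler )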
	
	
	==> kubelet <==
	Jul 17 17:18:38 addons-435911 kubelet[1283]: I0717 17:18:38.616250    1283 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a7a273-f40b-4503-a6f4-00ff9385aeda" containerName="csi-attacher"
	Jul 17 17:18:38 addons-435911 kubelet[1283]: I0717 17:18:38.616256    1283 memory_manager.go:354] "RemoveStaleState removing state" podUID="4379d8e7-b277-4b17-968f-98ee1a746757" containerName="node-driver-registrar"
	Jul 17 17:18:38 addons-435911 kubelet[1283]: I0717 17:18:38.758509    1283 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8bd855e9-5ad2-4b53-a4b8-81a2548d80be-gcp-creds\") pod \"hello-world-app-6778b5fc9f-sn68h\" (UID: \"8bd855e9-5ad2-4b53-a4b8-81a2548d80be\") " pod="default/hello-world-app-6778b5fc9f-sn68h"
	Jul 17 17:18:38 addons-435911 kubelet[1283]: I0717 17:18:38.758836    1283 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4z46\" (UniqueName: \"kubernetes.io/projected/8bd855e9-5ad2-4b53-a4b8-81a2548d80be-kube-api-access-b4z46\") pod \"hello-world-app-6778b5fc9f-sn68h\" (UID: \"8bd855e9-5ad2-4b53-a4b8-81a2548d80be\") " pod="default/hello-world-app-6778b5fc9f-sn68h"
	Jul 17 17:18:39 addons-435911 kubelet[1283]: I0717 17:18:39.766799    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bgkn\" (UniqueName: \"kubernetes.io/projected/5ba15390-d48e-46dd-a033-94fc60c42981-kube-api-access-2bgkn\") pod \"5ba15390-d48e-46dd-a033-94fc60c42981\" (UID: \"5ba15390-d48e-46dd-a033-94fc60c42981\") "
	Jul 17 17:18:39 addons-435911 kubelet[1283]: I0717 17:18:39.768750    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba15390-d48e-46dd-a033-94fc60c42981-kube-api-access-2bgkn" (OuterVolumeSpecName: "kube-api-access-2bgkn") pod "5ba15390-d48e-46dd-a033-94fc60c42981" (UID: "5ba15390-d48e-46dd-a033-94fc60c42981"). InnerVolumeSpecName "kube-api-access-2bgkn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 17:18:39 addons-435911 kubelet[1283]: I0717 17:18:39.867264    1283 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2bgkn\" (UniqueName: \"kubernetes.io/projected/5ba15390-d48e-46dd-a033-94fc60c42981-kube-api-access-2bgkn\") on node \"addons-435911\" DevicePath \"\""
	Jul 17 17:18:40 addons-435911 kubelet[1283]: I0717 17:18:40.375033    1283 scope.go:117] "RemoveContainer" containerID="caf7fb4406148b53bbad9d29fc3fb43150bb4af2e6cdb872225e4fe0a6ea1f3f"
	Jul 17 17:18:40 addons-435911 kubelet[1283]: I0717 17:18:40.410877    1283 scope.go:117] "RemoveContainer" containerID="caf7fb4406148b53bbad9d29fc3fb43150bb4af2e6cdb872225e4fe0a6ea1f3f"
	Jul 17 17:18:40 addons-435911 kubelet[1283]: E0717 17:18:40.411457    1283 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf7fb4406148b53bbad9d29fc3fb43150bb4af2e6cdb872225e4fe0a6ea1f3f\": container with ID starting with caf7fb4406148b53bbad9d29fc3fb43150bb4af2e6cdb872225e4fe0a6ea1f3f not found: ID does not exist" containerID="caf7fb4406148b53bbad9d29fc3fb43150bb4af2e6cdb872225e4fe0a6ea1f3f"
	Jul 17 17:18:40 addons-435911 kubelet[1283]: I0717 17:18:40.411492    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf7fb4406148b53bbad9d29fc3fb43150bb4af2e6cdb872225e4fe0a6ea1f3f"} err="failed to get container status \"caf7fb4406148b53bbad9d29fc3fb43150bb4af2e6cdb872225e4fe0a6ea1f3f\": rpc error: code = NotFound desc = could not find container \"caf7fb4406148b53bbad9d29fc3fb43150bb4af2e6cdb872225e4fe0a6ea1f3f\": container with ID starting with caf7fb4406148b53bbad9d29fc3fb43150bb4af2e6cdb872225e4fe0a6ea1f3f not found: ID does not exist"
	Jul 17 17:18:41 addons-435911 kubelet[1283]: I0717 17:18:41.286586    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ba15390-d48e-46dd-a033-94fc60c42981" path="/var/lib/kubelet/pods/5ba15390-d48e-46dd-a033-94fc60c42981/volumes"
	Jul 17 17:18:41 addons-435911 kubelet[1283]: I0717 17:18:41.287104    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93896a00-33ce-4684-8a3e-e27f3b4f025a" path="/var/lib/kubelet/pods/93896a00-33ce-4684-8a3e-e27f3b4f025a/volumes"
	Jul 17 17:18:41 addons-435911 kubelet[1283]: I0717 17:18:41.287562    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956fbce6-bf17-4d3c-af0d-c5e16a8b9064" path="/var/lib/kubelet/pods/956fbce6-bf17-4d3c-af0d-c5e16a8b9064/volumes"
	Jul 17 17:18:44 addons-435911 kubelet[1283]: I0717 17:18:44.299726    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw2fc\" (UniqueName: \"kubernetes.io/projected/f6d9e1dd-adba-422c-985c-253dffb73fa0-kube-api-access-mw2fc\") pod \"f6d9e1dd-adba-422c-985c-253dffb73fa0\" (UID: \"f6d9e1dd-adba-422c-985c-253dffb73fa0\") "
	Jul 17 17:18:44 addons-435911 kubelet[1283]: I0717 17:18:44.299790    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6d9e1dd-adba-422c-985c-253dffb73fa0-webhook-cert\") pod \"f6d9e1dd-adba-422c-985c-253dffb73fa0\" (UID: \"f6d9e1dd-adba-422c-985c-253dffb73fa0\") "
	Jul 17 17:18:44 addons-435911 kubelet[1283]: I0717 17:18:44.302222    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6d9e1dd-adba-422c-985c-253dffb73fa0-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f6d9e1dd-adba-422c-985c-253dffb73fa0" (UID: "f6d9e1dd-adba-422c-985c-253dffb73fa0"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 17:18:44 addons-435911 kubelet[1283]: I0717 17:18:44.303148    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6d9e1dd-adba-422c-985c-253dffb73fa0-kube-api-access-mw2fc" (OuterVolumeSpecName: "kube-api-access-mw2fc") pod "f6d9e1dd-adba-422c-985c-253dffb73fa0" (UID: "f6d9e1dd-adba-422c-985c-253dffb73fa0"). InnerVolumeSpecName "kube-api-access-mw2fc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 17:18:44 addons-435911 kubelet[1283]: I0717 17:18:44.400782    1283 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mw2fc\" (UniqueName: \"kubernetes.io/projected/f6d9e1dd-adba-422c-985c-253dffb73fa0-kube-api-access-mw2fc\") on node \"addons-435911\" DevicePath \"\""
	Jul 17 17:18:44 addons-435911 kubelet[1283]: I0717 17:18:44.400793    1283 scope.go:117] "RemoveContainer" containerID="b863babd35ed8e6e85134c30921d55b438789314832a2394000e1c63f9467ab8"
	Jul 17 17:18:44 addons-435911 kubelet[1283]: I0717 17:18:44.400812    1283 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f6d9e1dd-adba-422c-985c-253dffb73fa0-webhook-cert\") on node \"addons-435911\" DevicePath \"\""
	Jul 17 17:18:44 addons-435911 kubelet[1283]: I0717 17:18:44.415618    1283 scope.go:117] "RemoveContainer" containerID="b863babd35ed8e6e85134c30921d55b438789314832a2394000e1c63f9467ab8"
	Jul 17 17:18:44 addons-435911 kubelet[1283]: E0717 17:18:44.416086    1283 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b863babd35ed8e6e85134c30921d55b438789314832a2394000e1c63f9467ab8\": container with ID starting with b863babd35ed8e6e85134c30921d55b438789314832a2394000e1c63f9467ab8 not found: ID does not exist" containerID="b863babd35ed8e6e85134c30921d55b438789314832a2394000e1c63f9467ab8"
	Jul 17 17:18:44 addons-435911 kubelet[1283]: I0717 17:18:44.416129    1283 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b863babd35ed8e6e85134c30921d55b438789314832a2394000e1c63f9467ab8"} err="failed to get container status \"b863babd35ed8e6e85134c30921d55b438789314832a2394000e1c63f9467ab8\": rpc error: code = NotFound desc = could not find container \"b863babd35ed8e6e85134c30921d55b438789314832a2394000e1c63f9467ab8\": container with ID starting with b863babd35ed8e6e85134c30921d55b438789314832a2394000e1c63f9467ab8 not found: ID does not exist"
	Jul 17 17:18:45 addons-435911 kubelet[1283]: I0717 17:18:45.293960    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6d9e1dd-adba-422c-985c-253dffb73fa0" path="/var/lib/kubelet/pods/f6d9e1dd-adba-422c-985c-253dffb73fa0/volumes"
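	(The ContainerStatus NotFound errors are typically benign: the kubelet asks CRI-O for the status of containers it has just removed, here coinciding with the ingress-nginx teardown. The containers that remain can be listed from the guest; crictl ships in the minikube VM image:
	  out/minikube-linux-amd64 -p addons-435911 ssh "sudo crictl ps -a | head" )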
	
	
	==> storage-provisioner [7a721d6e9c61620875bf344ec13670996a8189bfa2f61fbb74a2396a22c8419f] <==
	I0717 17:13:25.484253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 17:13:25.736832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 17:13:25.740984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 17:13:25.808013       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 17:13:25.808146       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-435911_764cb43b-36c6-4c15-abfd-05fbe4f1b787!
	I0717 17:13:25.812035       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e2bc7bc-4a41-4c47-829d-1aeba2a7bb49", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-435911_764cb43b-36c6-4c15-abfd-05fbe4f1b787 became leader
	I0717 17:13:26.009864       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-435911_764cb43b-36c6-4c15-abfd-05fbe4f1b787!
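	(The provisioner takes a client-go leader-election lock on the kube-system/k8s.io-minikube-hostpath Endpoints object; the current holder is recorded in that object's control-plane.alpha.kubernetes.io/leader annotation and can be viewed with:
	  kubectl --context addons-435911 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml )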
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-435911 -n addons-435911
helpers_test.go:261: (dbg) Run:  kubectl --context addons-435911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.12s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (335.27s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 6.908439ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-qfn6h" [594c6a3c-368e-421e-9d3f-ceb3426c0cf7] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004925237s
addons_test.go:417: (dbg) Run:  kubectl --context addons-435911 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435911 top pods -n kube-system: exit status 1 (66.676197ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ktksd, age: 2m51.993777825s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435911 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435911 top pods -n kube-system: exit status 1 (66.259119ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ktksd, age: 2m53.839288835s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435911 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435911 top pods -n kube-system: exit status 1 (73.469043ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ktksd, age: 2m58.333279532s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435911 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435911 top pods -n kube-system: exit status 1 (68.268173ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ktksd, age: 3m2.709797714s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435911 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435911 top pods -n kube-system: exit status 1 (65.510894ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ktksd, age: 3m16.276160421s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435911 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435911 top pods -n kube-system: exit status 1 (80.093004ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ktksd, age: 3m24.93103533s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435911 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435911 top pods -n kube-system: exit status 1 (61.759722ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ktksd, age: 3m53.954063734s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435911 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435911 top pods -n kube-system: exit status 1 (58.733498ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ktksd, age: 4m33.701243785s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435911 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435911 top pods -n kube-system: exit status 1 (59.115001ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ktksd, age: 5m44.838370328s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435911 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435911 top pods -n kube-system: exit status 1 (64.270931ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ktksd, age: 6m59.156756768s

** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435911 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435911 top pods -n kube-system: exit status 1 (64.534231ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ktksd, age: 8m18.479661711s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 addons disable metrics-server --alsologtostderr -v=1
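(Note: the repeated "Non-zero exit" entries above are the test polling "kubectl top pods -n kube-system" until metrics-server reports data for the pods, which never happened within the allotted window. A minimal, self-contained sketch of that kind of poll loop follows; the helper name pollTopPods, the 9-minute window and the 15-second interval are illustrative assumptions, not the test's actual implementation.)

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// pollTopPods retries "kubectl top pods -n kube-system" until metrics are
// reported or the deadline passes. Illustrative sketch only.
func pollTopPods(ctx context.Context, kubeContext string, window time.Duration) error {
	deadline := time.Now().Add(window)
	for time.Now().Before(deadline) {
		cmd := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
			"top", "pods", "-n", "kube-system")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return nil
		}
		// metrics-server has not produced data for these pods yet; wait and retry.
		time.Sleep(15 * time.Second)
	}
	return fmt.Errorf("metrics never became available within %s", window)
}

func main() {
	if err := pollTopPods(context.Background(), "addons-435911", 9*time.Minute); err != nil {
		fmt.Println(err)
	}
}
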
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-435911 -n addons-435911
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-435911 logs -n 25: (1.298056266s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-865281                                                                     | download-only-865281 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| delete  | -p download-only-285503                                                                     | download-only-285503 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| delete  | -p download-only-840522                                                                     | download-only-840522 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| delete  | -p download-only-865281                                                                     | download-only-865281 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-325566 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC |                     |
	|         | binary-mirror-325566                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45523                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-325566                                                                     | binary-mirror-325566 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| addons  | enable dashboard -p                                                                         | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC |                     |
	|         | addons-435911                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC |                     |
	|         | addons-435911                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-435911 --wait=true                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:15 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:15 UTC | 17 Jul 24 17:15 UTC |
	|         | -p addons-435911                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:15 UTC | 17 Jul 24 17:15 UTC |
	|         | -p addons-435911                                                                            |                      |         |         |                     |                     |
	| addons  | addons-435911 addons disable                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | addons-435911                                                                               |                      |         |         |                     |                     |
	| ip      | addons-435911 ip                                                                            | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	| addons  | addons-435911 addons disable                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-435911 ssh cat                                                                       | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | /opt/local-path-provisioner/pvc-f3597c1f-ead9-4165-91c7-88a61a002e8f_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-435911 addons disable                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-435911 ssh curl -s                                                                   | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | addons-435911                                                                               |                      |         |         |                     |                     |
	| addons  | addons-435911 addons                                                                        | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-435911 addons                                                                        | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:16 UTC | 17 Jul 24 17:16 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-435911 ip                                                                            | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:18 UTC | 17 Jul 24 17:18 UTC |
	| addons  | addons-435911 addons disable                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:18 UTC | 17 Jul 24 17:18 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-435911 addons disable                                                                | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:18 UTC | 17 Jul 24 17:18 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-435911 addons                                                                        | addons-435911        | jenkins | v1.33.1 | 17 Jul 24 17:21 UTC | 17 Jul 24 17:21 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 17:12:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 17:12:20.366990   22585 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:12:20.367184   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:12:20.367193   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:12:20.367196   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:12:20.367357   22585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:12:20.367882   22585 out.go:298] Setting JSON to false
	I0717 17:12:20.368636   22585 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3283,"bootTime":1721233057,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 17:12:20.368687   22585 start.go:139] virtualization: kvm guest
	I0717 17:12:20.370849   22585 out.go:177] * [addons-435911] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 17:12:20.372158   22585 notify.go:220] Checking for updates...
	I0717 17:12:20.372165   22585 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 17:12:20.373709   22585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 17:12:20.375248   22585 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:12:20.376522   22585 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:12:20.377858   22585 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 17:12:20.379161   22585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 17:12:20.380429   22585 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 17:12:20.411530   22585 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 17:12:20.412986   22585 start.go:297] selected driver: kvm2
	I0717 17:12:20.413011   22585 start.go:901] validating driver "kvm2" against <nil>
	I0717 17:12:20.413024   22585 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 17:12:20.413702   22585 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:12:20.413788   22585 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 17:12:20.427867   22585 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 17:12:20.427918   22585 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 17:12:20.428167   22585 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:12:20.428194   22585 cni.go:84] Creating CNI manager for ""
	I0717 17:12:20.428201   22585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 17:12:20.428208   22585 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 17:12:20.428272   22585 start.go:340] cluster config:
	{Name:addons-435911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-435911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:12:20.428417   22585 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:12:20.430193   22585 out.go:177] * Starting "addons-435911" primary control-plane node in "addons-435911" cluster
	I0717 17:12:20.431619   22585 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:12:20.431646   22585 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 17:12:20.431662   22585 cache.go:56] Caching tarball of preloaded images
	I0717 17:12:20.431745   22585 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 17:12:20.431758   22585 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 17:12:20.432088   22585 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/config.json ...
	I0717 17:12:20.432124   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/config.json: {Name:mkdb577ecb5b4431a5b621d57f357237d5e29122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:20.432264   22585 start.go:360] acquireMachinesLock for addons-435911: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 17:12:20.432315   22585 start.go:364] duration metric: took 35.633µs to acquireMachinesLock for "addons-435911"
	I0717 17:12:20.432337   22585 start.go:93] Provisioning new machine with config: &{Name:addons-435911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-435911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:12:20.432400   22585 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 17:12:20.434179   22585 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0717 17:12:20.434293   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:12:20.434332   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:12:20.448111   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I0717 17:12:20.448539   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:12:20.449067   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:12:20.449089   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:12:20.449465   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:12:20.449643   22585 main.go:141] libmachine: (addons-435911) Calling .GetMachineName
	I0717 17:12:20.449782   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:20.449904   22585 start.go:159] libmachine.API.Create for "addons-435911" (driver="kvm2")
	I0717 17:12:20.449932   22585 client.go:168] LocalClient.Create starting
	I0717 17:12:20.449965   22585 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 17:12:20.701602   22585 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 17:12:20.890648   22585 main.go:141] libmachine: Running pre-create checks...
	I0717 17:12:20.890668   22585 main.go:141] libmachine: (addons-435911) Calling .PreCreateCheck
	I0717 17:12:20.891180   22585 main.go:141] libmachine: (addons-435911) Calling .GetConfigRaw
	I0717 17:12:20.891595   22585 main.go:141] libmachine: Creating machine...
	I0717 17:12:20.891615   22585 main.go:141] libmachine: (addons-435911) Calling .Create
	I0717 17:12:20.891772   22585 main.go:141] libmachine: (addons-435911) Creating KVM machine...
	I0717 17:12:20.893174   22585 main.go:141] libmachine: (addons-435911) DBG | found existing default KVM network
	I0717 17:12:20.893930   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:20.893777   22607 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0717 17:12:20.893957   22585 main.go:141] libmachine: (addons-435911) DBG | created network xml: 
	I0717 17:12:20.893972   22585 main.go:141] libmachine: (addons-435911) DBG | <network>
	I0717 17:12:20.893980   22585 main.go:141] libmachine: (addons-435911) DBG |   <name>mk-addons-435911</name>
	I0717 17:12:20.894039   22585 main.go:141] libmachine: (addons-435911) DBG |   <dns enable='no'/>
	I0717 17:12:20.894068   22585 main.go:141] libmachine: (addons-435911) DBG |   
	I0717 17:12:20.894079   22585 main.go:141] libmachine: (addons-435911) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 17:12:20.894087   22585 main.go:141] libmachine: (addons-435911) DBG |     <dhcp>
	I0717 17:12:20.894094   22585 main.go:141] libmachine: (addons-435911) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 17:12:20.894099   22585 main.go:141] libmachine: (addons-435911) DBG |     </dhcp>
	I0717 17:12:20.894104   22585 main.go:141] libmachine: (addons-435911) DBG |   </ip>
	I0717 17:12:20.894108   22585 main.go:141] libmachine: (addons-435911) DBG |   
	I0717 17:12:20.894114   22585 main.go:141] libmachine: (addons-435911) DBG | </network>
	I0717 17:12:20.894121   22585 main.go:141] libmachine: (addons-435911) DBG | 
	I0717 17:12:20.899544   22585 main.go:141] libmachine: (addons-435911) DBG | trying to create private KVM network mk-addons-435911 192.168.39.0/24...
	I0717 17:12:20.960024   22585 main.go:141] libmachine: (addons-435911) DBG | private KVM network mk-addons-435911 192.168.39.0/24 created
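(Aside: the XML logged above is the private libvirt network the kvm2 driver defines before building the VM. As a rough, standalone illustration, defining and starting the same network with the libvirt Go bindings could look like the sketch below; the import path libvirt.org/go/libvirt and the error handling are assumptions, not minikube's actual driver code.)

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-addons-435911</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// Define the persistent network from the XML above, then start it.
	net, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer net.Free()

	if err := net.Create(); err != nil {
		log.Fatalf("start network: %v", err)
	}
	log.Println("private network mk-addons-435911 is active")
}
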
	I0717 17:12:20.960053   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:20.959980   22607 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:12:20.960090   22585 main.go:141] libmachine: (addons-435911) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911 ...
	I0717 17:12:20.960125   22585 main.go:141] libmachine: (addons-435911) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 17:12:20.960153   22585 main.go:141] libmachine: (addons-435911) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 17:12:21.190737   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:21.190626   22607 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa...
	I0717 17:12:21.271060   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:21.270962   22607 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/addons-435911.rawdisk...
	I0717 17:12:21.271088   22585 main.go:141] libmachine: (addons-435911) DBG | Writing magic tar header
	I0717 17:12:21.271104   22585 main.go:141] libmachine: (addons-435911) DBG | Writing SSH key tar header
	I0717 17:12:21.271653   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:21.271575   22607 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911 ...
	I0717 17:12:21.271690   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911
	I0717 17:12:21.271705   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 17:12:21.271719   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911 (perms=drwx------)
	I0717 17:12:21.271730   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 17:12:21.271736   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 17:12:21.271743   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 17:12:21.271748   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 17:12:21.271759   22585 main.go:141] libmachine: (addons-435911) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 17:12:21.271767   22585 main.go:141] libmachine: (addons-435911) Creating domain...
	I0717 17:12:21.271777   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:12:21.271792   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 17:12:21.271798   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 17:12:21.271804   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home/jenkins
	I0717 17:12:21.271812   22585 main.go:141] libmachine: (addons-435911) DBG | Checking permissions on dir: /home
	I0717 17:12:21.271821   22585 main.go:141] libmachine: (addons-435911) DBG | Skipping /home - not owner
	I0717 17:12:21.272935   22585 main.go:141] libmachine: (addons-435911) define libvirt domain using xml: 
	I0717 17:12:21.272975   22585 main.go:141] libmachine: (addons-435911) <domain type='kvm'>
	I0717 17:12:21.272984   22585 main.go:141] libmachine: (addons-435911)   <name>addons-435911</name>
	I0717 17:12:21.272989   22585 main.go:141] libmachine: (addons-435911)   <memory unit='MiB'>4000</memory>
	I0717 17:12:21.272995   22585 main.go:141] libmachine: (addons-435911)   <vcpu>2</vcpu>
	I0717 17:12:21.273001   22585 main.go:141] libmachine: (addons-435911)   <features>
	I0717 17:12:21.273032   22585 main.go:141] libmachine: (addons-435911)     <acpi/>
	I0717 17:12:21.273054   22585 main.go:141] libmachine: (addons-435911)     <apic/>
	I0717 17:12:21.273074   22585 main.go:141] libmachine: (addons-435911)     <pae/>
	I0717 17:12:21.273088   22585 main.go:141] libmachine: (addons-435911)     
	I0717 17:12:21.273100   22585 main.go:141] libmachine: (addons-435911)   </features>
	I0717 17:12:21.273113   22585 main.go:141] libmachine: (addons-435911)   <cpu mode='host-passthrough'>
	I0717 17:12:21.273121   22585 main.go:141] libmachine: (addons-435911)   
	I0717 17:12:21.273134   22585 main.go:141] libmachine: (addons-435911)   </cpu>
	I0717 17:12:21.273145   22585 main.go:141] libmachine: (addons-435911)   <os>
	I0717 17:12:21.273154   22585 main.go:141] libmachine: (addons-435911)     <type>hvm</type>
	I0717 17:12:21.273163   22585 main.go:141] libmachine: (addons-435911)     <boot dev='cdrom'/>
	I0717 17:12:21.273167   22585 main.go:141] libmachine: (addons-435911)     <boot dev='hd'/>
	I0717 17:12:21.273173   22585 main.go:141] libmachine: (addons-435911)     <bootmenu enable='no'/>
	I0717 17:12:21.273179   22585 main.go:141] libmachine: (addons-435911)   </os>
	I0717 17:12:21.273189   22585 main.go:141] libmachine: (addons-435911)   <devices>
	I0717 17:12:21.273201   22585 main.go:141] libmachine: (addons-435911)     <disk type='file' device='cdrom'>
	I0717 17:12:21.273210   22585 main.go:141] libmachine: (addons-435911)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/boot2docker.iso'/>
	I0717 17:12:21.273217   22585 main.go:141] libmachine: (addons-435911)       <target dev='hdc' bus='scsi'/>
	I0717 17:12:21.273223   22585 main.go:141] libmachine: (addons-435911)       <readonly/>
	I0717 17:12:21.273229   22585 main.go:141] libmachine: (addons-435911)     </disk>
	I0717 17:12:21.273235   22585 main.go:141] libmachine: (addons-435911)     <disk type='file' device='disk'>
	I0717 17:12:21.273243   22585 main.go:141] libmachine: (addons-435911)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 17:12:21.273251   22585 main.go:141] libmachine: (addons-435911)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/addons-435911.rawdisk'/>
	I0717 17:12:21.273262   22585 main.go:141] libmachine: (addons-435911)       <target dev='hda' bus='virtio'/>
	I0717 17:12:21.273267   22585 main.go:141] libmachine: (addons-435911)     </disk>
	I0717 17:12:21.273276   22585 main.go:141] libmachine: (addons-435911)     <interface type='network'>
	I0717 17:12:21.273282   22585 main.go:141] libmachine: (addons-435911)       <source network='mk-addons-435911'/>
	I0717 17:12:21.273286   22585 main.go:141] libmachine: (addons-435911)       <model type='virtio'/>
	I0717 17:12:21.273292   22585 main.go:141] libmachine: (addons-435911)     </interface>
	I0717 17:12:21.273299   22585 main.go:141] libmachine: (addons-435911)     <interface type='network'>
	I0717 17:12:21.273305   22585 main.go:141] libmachine: (addons-435911)       <source network='default'/>
	I0717 17:12:21.273310   22585 main.go:141] libmachine: (addons-435911)       <model type='virtio'/>
	I0717 17:12:21.273318   22585 main.go:141] libmachine: (addons-435911)     </interface>
	I0717 17:12:21.273322   22585 main.go:141] libmachine: (addons-435911)     <serial type='pty'>
	I0717 17:12:21.273329   22585 main.go:141] libmachine: (addons-435911)       <target port='0'/>
	I0717 17:12:21.273333   22585 main.go:141] libmachine: (addons-435911)     </serial>
	I0717 17:12:21.273345   22585 main.go:141] libmachine: (addons-435911)     <console type='pty'>
	I0717 17:12:21.273354   22585 main.go:141] libmachine: (addons-435911)       <target type='serial' port='0'/>
	I0717 17:12:21.273360   22585 main.go:141] libmachine: (addons-435911)     </console>
	I0717 17:12:21.273372   22585 main.go:141] libmachine: (addons-435911)     <rng model='virtio'>
	I0717 17:12:21.273381   22585 main.go:141] libmachine: (addons-435911)       <backend model='random'>/dev/random</backend>
	I0717 17:12:21.273388   22585 main.go:141] libmachine: (addons-435911)     </rng>
	I0717 17:12:21.273393   22585 main.go:141] libmachine: (addons-435911)     
	I0717 17:12:21.273400   22585 main.go:141] libmachine: (addons-435911)     
	I0717 17:12:21.273405   22585 main.go:141] libmachine: (addons-435911)   </devices>
	I0717 17:12:21.273409   22585 main.go:141] libmachine: (addons-435911) </domain>
	I0717 17:12:21.273416   22585 main.go:141] libmachine: (addons-435911) 
	I0717 17:12:21.279156   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:24:c5:64 in network default
	I0717 17:12:21.279689   22585 main.go:141] libmachine: (addons-435911) Ensuring networks are active...
	I0717 17:12:21.279706   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:21.280307   22585 main.go:141] libmachine: (addons-435911) Ensuring network default is active
	I0717 17:12:21.280635   22585 main.go:141] libmachine: (addons-435911) Ensuring network mk-addons-435911 is active
	I0717 17:12:21.281111   22585 main.go:141] libmachine: (addons-435911) Getting domain xml...
	I0717 17:12:21.281739   22585 main.go:141] libmachine: (addons-435911) Creating domain...
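(Continuing the libvirt sketch above: the <domain type='kvm'> document just logged is what gets defined and booted at this step. A condensed, assumed illustration, reusing the libvirt.org/go/libvirt connection and adding "fmt" to the imports; this is not the kvm2 machine plugin's actual code.)

// defineAndStartDomain is a minimal sketch: domainXML would be the
// <domain type='kvm'> document logged above.
func defineAndStartDomain(conn *libvirt.Connect, domainXML string) error {
	// Define the persistent domain from the XML ...
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()
	// ... then boot it; Create() starts a defined domain.
	return dom.Create()
}
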
	I0717 17:12:22.663364   22585 main.go:141] libmachine: (addons-435911) Waiting to get IP...
	I0717 17:12:22.664232   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:22.664615   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:22.664646   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:22.664600   22607 retry.go:31] will retry after 247.523027ms: waiting for machine to come up
	I0717 17:12:22.914133   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:22.914537   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:22.914561   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:22.914504   22607 retry.go:31] will retry after 330.822603ms: waiting for machine to come up
	I0717 17:12:23.246937   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:23.247313   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:23.247342   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:23.247269   22607 retry.go:31] will retry after 384.776946ms: waiting for machine to come up
	I0717 17:12:23.633885   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:23.634274   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:23.634298   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:23.634225   22607 retry.go:31] will retry after 371.079585ms: waiting for machine to come up
	I0717 17:12:24.006814   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:24.007316   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:24.007359   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:24.007284   22607 retry.go:31] will retry after 675.440496ms: waiting for machine to come up
	I0717 17:12:24.684266   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:24.684682   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:24.684702   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:24.684662   22607 retry.go:31] will retry after 718.016746ms: waiting for machine to come up
	I0717 17:12:25.404589   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:25.405027   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:25.405045   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:25.405013   22607 retry.go:31] will retry after 828.529004ms: waiting for machine to come up
	I0717 17:12:26.235561   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:26.235986   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:26.236010   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:26.235972   22607 retry.go:31] will retry after 1.204384515s: waiting for machine to come up
	I0717 17:12:27.442372   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:27.442919   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:27.442949   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:27.442884   22607 retry.go:31] will retry after 1.146713076s: waiting for machine to come up
	I0717 17:12:28.591279   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:28.591820   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:28.591849   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:28.591723   22607 retry.go:31] will retry after 1.784907319s: waiting for machine to come up
	I0717 17:12:30.378557   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:30.378986   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:30.379014   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:30.378933   22607 retry.go:31] will retry after 2.189248903s: waiting for machine to come up
	I0717 17:12:32.569289   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:32.569746   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:32.569768   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:32.569709   22607 retry.go:31] will retry after 2.991910233s: waiting for machine to come up
	I0717 17:12:35.563308   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:35.563703   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:35.563729   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:35.563675   22607 retry.go:31] will retry after 3.89189793s: waiting for machine to come up
	I0717 17:12:39.459734   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:39.460097   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find current IP address of domain addons-435911 in network mk-addons-435911
	I0717 17:12:39.460117   22585 main.go:141] libmachine: (addons-435911) DBG | I0717 17:12:39.460059   22607 retry.go:31] will retry after 5.371779373s: waiting for machine to come up
	I0717 17:12:44.836315   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:44.836725   22585 main.go:141] libmachine: (addons-435911) Found IP for machine: 192.168.39.27
	I0717 17:12:44.836749   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has current primary IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
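(Aside: the string of "will retry after ..." lines above is the driver polling the network's DHCP leases with a growing, jittered delay until the guest obtains an address. A generic sketch of that retry pattern follows; waitForIP and lookupIP are hypothetical names, and the function assumes "fmt", "math/rand" and "time" are imported.)

// waitForIP polls lookupIP with a growing, jittered delay, mirroring the
// "will retry after ..." behaviour in the log. lookupIP stands in for
// however the guest's DHCP lease is actually resolved.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Add up to 50% jitter, then grow the base delay (capped around 5s).
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2)+1)))
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}
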
	I0717 17:12:44.836759   22585 main.go:141] libmachine: (addons-435911) Reserving static IP address...
	I0717 17:12:44.837027   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find host DHCP lease matching {name: "addons-435911", mac: "52:54:00:9b:64:f4", ip: "192.168.39.27"} in network mk-addons-435911
	I0717 17:12:44.903693   22585 main.go:141] libmachine: (addons-435911) DBG | Getting to WaitForSSH function...
	I0717 17:12:44.903720   22585 main.go:141] libmachine: (addons-435911) Reserved static IP address: 192.168.39.27
	I0717 17:12:44.903760   22585 main.go:141] libmachine: (addons-435911) Waiting for SSH to be available...
	I0717 17:12:44.905971   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:44.906372   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911
	I0717 17:12:44.906398   22585 main.go:141] libmachine: (addons-435911) DBG | unable to find defined IP address of network mk-addons-435911 interface with MAC address 52:54:00:9b:64:f4
	I0717 17:12:44.906547   22585 main.go:141] libmachine: (addons-435911) DBG | Using SSH client type: external
	I0717 17:12:44.906572   22585 main.go:141] libmachine: (addons-435911) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa (-rw-------)
	I0717 17:12:44.906616   22585 main.go:141] libmachine: (addons-435911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 17:12:44.906645   22585 main.go:141] libmachine: (addons-435911) DBG | About to run SSH command:
	I0717 17:12:44.906680   22585 main.go:141] libmachine: (addons-435911) DBG | exit 0
	I0717 17:12:44.917214   22585 main.go:141] libmachine: (addons-435911) DBG | SSH cmd err, output: exit status 255: 
	I0717 17:12:44.917239   22585 main.go:141] libmachine: (addons-435911) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 17:12:44.917249   22585 main.go:141] libmachine: (addons-435911) DBG | command : exit 0
	I0717 17:12:44.917261   22585 main.go:141] libmachine: (addons-435911) DBG | err     : exit status 255
	I0717 17:12:44.917272   22585 main.go:141] libmachine: (addons-435911) DBG | output  : 
	I0717 17:12:47.918853   22585 main.go:141] libmachine: (addons-435911) DBG | Getting to WaitForSSH function...
	I0717 17:12:47.921231   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:47.921588   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:47.921616   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:47.921713   22585 main.go:141] libmachine: (addons-435911) DBG | Using SSH client type: external
	I0717 17:12:47.921754   22585 main.go:141] libmachine: (addons-435911) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa (-rw-------)
	I0717 17:12:47.921776   22585 main.go:141] libmachine: (addons-435911) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 17:12:47.921785   22585 main.go:141] libmachine: (addons-435911) DBG | About to run SSH command:
	I0717 17:12:47.921794   22585 main.go:141] libmachine: (addons-435911) DBG | exit 0
	I0717 17:12:48.048760   22585 main.go:141] libmachine: (addons-435911) DBG | SSH cmd err, output: <nil>: 
	I0717 17:12:48.049073   22585 main.go:141] libmachine: (addons-435911) KVM machine creation complete!
	I0717 17:12:48.049426   22585 main.go:141] libmachine: (addons-435911) Calling .GetConfigRaw
	I0717 17:12:48.049923   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:48.050199   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:48.050337   22585 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 17:12:48.050351   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:12:48.051580   22585 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 17:12:48.051602   22585 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 17:12:48.051617   22585 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 17:12:48.051625   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.054895   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.055306   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.055332   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.055448   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.055637   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.055813   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.055948   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.056100   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:48.056323   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:48.056336   22585 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 17:12:48.164094   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:12:48.164118   22585 main.go:141] libmachine: Detecting the provisioner...
	I0717 17:12:48.164126   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.167033   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.167405   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.167435   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.167616   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.167834   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.168053   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.168214   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.168391   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:48.168586   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:48.168598   22585 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 17:12:48.280967   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 17:12:48.281039   22585 main.go:141] libmachine: found compatible host: buildroot
	I0717 17:12:48.281046   22585 main.go:141] libmachine: Provisioning with buildroot...
	I0717 17:12:48.281053   22585 main.go:141] libmachine: (addons-435911) Calling .GetMachineName
	I0717 17:12:48.281275   22585 buildroot.go:166] provisioning hostname "addons-435911"
	I0717 17:12:48.281299   22585 main.go:141] libmachine: (addons-435911) Calling .GetMachineName
	I0717 17:12:48.281493   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.283850   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.284164   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.284188   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.284304   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.284476   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.284608   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.284718   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.284886   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:48.285074   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:48.285087   22585 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-435911 && echo "addons-435911" | sudo tee /etc/hostname
	I0717 17:12:48.410069   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-435911
	
	I0717 17:12:48.410094   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.412902   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.413231   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.413258   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.413425   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.413613   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.413764   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.413903   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.414052   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:48.414220   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:48.414236   22585 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-435911' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-435911/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-435911' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 17:12:48.532314   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
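
The hostname step above is idempotent: it only rewrites the 127.0.1.1 entry when /etc/hosts does not already map the machine name. A minimal Go sketch of building that shell snippet (buildHostsFixup is a hypothetical name, not minikube's actual helper):

package main

import "fmt"

// buildHostsFixup returns a shell snippet that ensures /etc/hosts maps
// 127.0.1.1 to the given hostname, mirroring the command logged above.
// Illustrative sketch only.
func buildHostsFixup(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(buildHostsFixup("addons-435911"))
}
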
	I0717 17:12:48.532344   22585 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 17:12:48.532370   22585 buildroot.go:174] setting up certificates
	I0717 17:12:48.532379   22585 provision.go:84] configureAuth start
	I0717 17:12:48.532387   22585 main.go:141] libmachine: (addons-435911) Calling .GetMachineName
	I0717 17:12:48.532630   22585 main.go:141] libmachine: (addons-435911) Calling .GetIP
	I0717 17:12:48.535212   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.535528   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.535554   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.535720   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.537747   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.538049   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.538078   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.538206   22585 provision.go:143] copyHostCerts
	I0717 17:12:48.538294   22585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 17:12:48.538424   22585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 17:12:48.538491   22585 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 17:12:48.538550   22585 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.addons-435911 san=[127.0.0.1 192.168.39.27 addons-435911 localhost minikube]
	I0717 17:12:48.622659   22585 provision.go:177] copyRemoteCerts
	I0717 17:12:48.622715   22585 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 17:12:48.622739   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.625089   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.625450   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.625479   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.625676   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.625864   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.626027   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.626143   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:12:48.710270   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 17:12:48.732149   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 17:12:48.753354   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 17:12:48.774420   22585 provision.go:87] duration metric: took 242.030333ms to configureAuth
	I0717 17:12:48.774446   22585 buildroot.go:189] setting minikube options for container-runtime
	I0717 17:12:48.774642   22585 config.go:182] Loaded profile config "addons-435911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:12:48.774725   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:48.777231   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.777637   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:48.777666   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:48.777912   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:48.778066   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.778218   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:48.778341   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:48.778505   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:48.778710   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:48.778726   22585 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 17:12:49.032520   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
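
The provisioner step above drops an insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O. A small sketch, assuming a hypothetical helper crioOptionsCmd, of how such a command string could be composed:

package main

import "fmt"

// crioOptionsCmd composes the command that writes the insecure-registry option
// for CRI-O and restarts the service, as logged above. The real provisioner
// assembles this inside minikube's buildroot provisioner; this is illustrative.
func crioOptionsCmd(insecureRegistry string) string {
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureRegistry)
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, content)
}

func main() {
	fmt.Println(crioOptionsCmd("10.96.0.0/12"))
}
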
	I0717 17:12:49.032549   22585 main.go:141] libmachine: Checking connection to Docker...
	I0717 17:12:49.032565   22585 main.go:141] libmachine: (addons-435911) Calling .GetURL
	I0717 17:12:49.033829   22585 main.go:141] libmachine: (addons-435911) DBG | Using libvirt version 6000000
	I0717 17:12:49.035798   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.036113   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.036143   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.036315   22585 main.go:141] libmachine: Docker is up and running!
	I0717 17:12:49.036345   22585 main.go:141] libmachine: Reticulating splines...
	I0717 17:12:49.036353   22585 client.go:171] duration metric: took 28.586414531s to LocalClient.Create
	I0717 17:12:49.036381   22585 start.go:167] duration metric: took 28.586477393s to libmachine.API.Create "addons-435911"
	I0717 17:12:49.036392   22585 start.go:293] postStartSetup for "addons-435911" (driver="kvm2")
	I0717 17:12:49.036405   22585 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 17:12:49.036420   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:49.036654   22585 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 17:12:49.036677   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:49.038670   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.038978   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.039013   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.039149   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:49.039343   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:49.039557   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:49.039747   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:12:49.126359   22585 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 17:12:49.129885   22585 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 17:12:49.129906   22585 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 17:12:49.129971   22585 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 17:12:49.130003   22585 start.go:296] duration metric: took 93.60127ms for postStartSetup
	I0717 17:12:49.130037   22585 main.go:141] libmachine: (addons-435911) Calling .GetConfigRaw
	I0717 17:12:49.130544   22585 main.go:141] libmachine: (addons-435911) Calling .GetIP
	I0717 17:12:49.132876   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.133220   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.133242   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.133505   22585 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/config.json ...
	I0717 17:12:49.133872   22585 start.go:128] duration metric: took 28.701458337s to createHost
	I0717 17:12:49.133914   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:49.135858   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.136178   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.136203   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.136361   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:49.136506   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:49.136672   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:49.136925   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:49.137124   22585 main.go:141] libmachine: Using SSH client type: native
	I0717 17:12:49.137269   22585 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0717 17:12:49.137279   22585 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 17:12:49.249043   22585 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721236369.232773094
	
	I0717 17:12:49.249062   22585 fix.go:216] guest clock: 1721236369.232773094
	I0717 17:12:49.249071   22585 fix.go:229] Guest: 2024-07-17 17:12:49.232773094 +0000 UTC Remote: 2024-07-17 17:12:49.133891028 +0000 UTC m=+28.797781974 (delta=98.882066ms)
	I0717 17:12:49.249122   22585 fix.go:200] guest clock delta is within tolerance: 98.882066ms
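
The guest-clock check above compares the VM's clock against the host and only intervenes if the skew exceeds a tolerance. A sketch of that comparison (the one-second tolerance is an assumption for illustration; the logged run accepted a ~98.9ms delta):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute host/guest clock delta and whether it
// falls inside the given tolerance, mirroring the fix.go lines above.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(98 * time.Millisecond) // roughly the delta seen in the log
	d, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}
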
	I0717 17:12:49.249133   22585 start.go:83] releasing machines lock for "addons-435911", held for 28.816804737s
	I0717 17:12:49.249164   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:49.249442   22585 main.go:141] libmachine: (addons-435911) Calling .GetIP
	I0717 17:12:49.251770   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.252124   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.252157   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.252326   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:49.252744   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:49.252902   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:12:49.252997   22585 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 17:12:49.253047   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:49.253098   22585 ssh_runner.go:195] Run: cat /version.json
	I0717 17:12:49.253121   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:12:49.255579   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.255900   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.255927   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.255944   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.256131   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:49.256291   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:49.256298   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:49.256324   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:49.256426   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:49.256489   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:12:49.256550   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:12:49.256637   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:12:49.256752   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:12:49.256896   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:12:49.395192   22585 ssh_runner.go:195] Run: systemctl --version
	I0717 17:12:49.400814   22585 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 17:12:49.559240   22585 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 17:12:49.564537   22585 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 17:12:49.564604   22585 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 17:12:49.579940   22585 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
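
Disabling the stock bridge/podman CNI configs is done by renaming them with a .mk_disabled suffix so only the CNI that minikube manages stays active. A local sketch of the same idea (the logged run shells out to find/mv over SSH; disableBridgeCNI is a hypothetical name):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman configs in dir to <name>.mk_disabled
// and returns the paths it disabled.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	names, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(names, err)
}
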
	I0717 17:12:49.579968   22585 start.go:495] detecting cgroup driver to use...
	I0717 17:12:49.580029   22585 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 17:12:49.596395   22585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 17:12:49.609240   22585 docker.go:217] disabling cri-docker service (if available) ...
	I0717 17:12:49.609285   22585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 17:12:49.621479   22585 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 17:12:49.633458   22585 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 17:12:49.738766   22585 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 17:12:49.872432   22585 docker.go:233] disabling docker service ...
	I0717 17:12:49.872506   22585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 17:12:49.886498   22585 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 17:12:49.898345   22585 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 17:12:50.022237   22585 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 17:12:50.151491   22585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 17:12:50.165373   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 17:12:50.182872   22585 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 17:12:50.182924   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.192111   22585 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 17:12:50.192165   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.201488   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.210877   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.220202   22585 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 17:12:50.229671   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.238829   22585 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.254364   22585 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:12:50.263454   22585 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 17:12:50.271714   22585 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 17:12:50.271774   22585 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 17:12:50.283367   22585 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 17:12:50.293533   22585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:12:50.412573   22585 ssh_runner.go:195] Run: sudo systemctl restart crio
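
The block above rewrites pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf with sed, then reloads systemd and restarts CRI-O. A sketch of the equivalent substitutions done locally in Go (patchCrioConf is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf rewrites the pause_image and cgroup_manager lines of a CRI-O
// drop-in config, matching the sed invocations logged above.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = "%s"`, pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = "%s"`, cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
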
	I0717 17:12:50.542071   22585 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 17:12:50.542165   22585 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 17:12:50.546584   22585 start.go:563] Will wait 60s for crictl version
	I0717 17:12:50.546657   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:12:50.550138   22585 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 17:12:50.588068   22585 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 17:12:50.588171   22585 ssh_runner.go:195] Run: crio --version
	I0717 17:12:50.613570   22585 ssh_runner.go:195] Run: crio --version
	I0717 17:12:50.640793   22585 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 17:12:50.642142   22585 main.go:141] libmachine: (addons-435911) Calling .GetIP
	I0717 17:12:50.644630   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:50.644980   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:12:50.645006   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:12:50.645190   22585 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 17:12:50.648870   22585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:12:50.660006   22585 kubeadm.go:883] updating cluster {Name:addons-435911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-435911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 17:12:50.660100   22585 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:12:50.660136   22585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:12:50.689506   22585 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 17:12:50.689565   22585 ssh_runner.go:195] Run: which lz4
	I0717 17:12:50.692956   22585 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 17:12:50.696514   22585 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 17:12:50.696540   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 17:12:51.816361   22585 crio.go:462] duration metric: took 1.123440309s to copy over tarball
	I0717 17:12:51.816426   22585 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 17:12:53.941290   22585 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.124828904s)
	I0717 17:12:53.941322   22585 crio.go:469] duration metric: took 2.124931521s to extract the tarball
	I0717 17:12:53.941331   22585 ssh_runner.go:146] rm: /preloaded.tar.lz4
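
Extraction of the preloaded image tarball is timed and reported as a duration metric. A sketch of that step, assuming tar with lz4 support and root on the target host (extractPreload is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks the lz4-compressed preload tarball into /var and
// returns how long the extraction took, mirroring the crio.go lines above.
func extractPreload(tarball string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	err := cmd.Run()
	return time.Since(start), err
}

func main() {
	d, err := extractPreload("/preloaded.tar.lz4")
	fmt.Printf("extract took %s, err=%v\n", d, err)
}
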
	I0717 17:12:53.978909   22585 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:12:54.017860   22585 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 17:12:54.017881   22585 cache_images.go:84] Images are preloaded, skipping loading
	I0717 17:12:54.017889   22585 kubeadm.go:934] updating node { 192.168.39.27 8443 v1.30.2 crio true true} ...
	I0717 17:12:54.017992   22585 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-435911 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-435911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 17:12:54.018059   22585 ssh_runner.go:195] Run: crio config
	I0717 17:12:54.064558   22585 cni.go:84] Creating CNI manager for ""
	I0717 17:12:54.064582   22585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 17:12:54.064599   22585 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 17:12:54.064618   22585 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.27 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-435911 NodeName:addons-435911 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.27 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 17:12:54.064748   22585 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.27
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-435911"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.27
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.27"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
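
The kubeadm config above is generated from the options struct logged at kubeadm.go:181. A reduced sketch that renders only the node-specific InitConfiguration fields from a template; minikube's real generator covers all four documents shown:

package main

import (
	"os"
	"text/template"
)

// initCfg is a cut-down template covering just the InitConfiguration portion
// of the config printed above. Field names in the struct below are assumptions
// for this sketch, not minikube's internal types.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, struct {
		NodeIP        string
		NodeName      string
		APIServerPort int
	}{NodeIP: "192.168.39.27", NodeName: "addons-435911", APIServerPort: 8443})
}
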
	I0717 17:12:54.064802   22585 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 17:12:54.074116   22585 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 17:12:54.074175   22585 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 17:12:54.082778   22585 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0717 17:12:54.097885   22585 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 17:12:54.112059   22585 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0717 17:12:54.126647   22585 ssh_runner.go:195] Run: grep 192.168.39.27	control-plane.minikube.internal$ /etc/hosts
	I0717 17:12:54.130044   22585 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.27	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:12:54.140671   22585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:12:54.266939   22585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:12:54.282934   22585 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911 for IP: 192.168.39.27
	I0717 17:12:54.282960   22585 certs.go:194] generating shared ca certs ...
	I0717 17:12:54.282987   22585 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.283187   22585 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 17:12:54.473224   22585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt ...
	I0717 17:12:54.473253   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt: {Name:mk17882ef5dcf40e93d7619736a48c61e30e328f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.473427   22585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key ...
	I0717 17:12:54.473439   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key: {Name:mk0fca5350592dfe5ae9d9677aec02e7fe7cc35c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.473507   22585 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 17:12:54.586696   22585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt ...
	I0717 17:12:54.586720   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt: {Name:mk4eea84367f846b920e703dd452e9f97fd8ad6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.586863   22585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key ...
	I0717 17:12:54.586872   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key: {Name:mkf201638f64cc3da374fe05d83585c5e0d0e704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.586935   22585 certs.go:256] generating profile certs ...
	I0717 17:12:54.586986   22585 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.key
	I0717 17:12:54.586999   22585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt with IP's: []
	I0717 17:12:54.668550   22585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt ...
	I0717 17:12:54.668576   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: {Name:mk357a8842a686268c508f5a902817e5bdcbe059 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.668719   22585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.key ...
	I0717 17:12:54.668728   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.key: {Name:mk4dc1c4180c409187e71d4006f58e4110a1c65a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.668793   22585 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key.fd341990
	I0717 17:12:54.668810   22585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt.fd341990 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.27]
	I0717 17:12:54.931866   22585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt.fd341990 ...
	I0717 17:12:54.931895   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt.fd341990: {Name:mk982a56b4590d26e0b84c44a3e89439bfaadaab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.932043   22585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key.fd341990 ...
	I0717 17:12:54.932055   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key.fd341990: {Name:mk0f5fca9e43e6ff2c28cbdea47a8aba49c8ceb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:54.932122   22585 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt.fd341990 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt
	I0717 17:12:54.932217   22585 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key.fd341990 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key
	I0717 17:12:54.932268   22585 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.key
	I0717 17:12:54.932285   22585 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.crt with IP's: []
	I0717 17:12:55.135230   22585 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.crt ...
	I0717 17:12:55.135262   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.crt: {Name:mk7b35d8183089ba13b7664c58a1b1bac1809062 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:12:55.135441   22585 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.key ...
	I0717 17:12:55.135455   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.key: {Name:mk5be018df9a3c93dbcf168de48d35577e14e28c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
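
The profile certificates above are signed by the local minikubeCA with the SAN IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.27. A self-contained sketch of issuing such a server certificate with Go's crypto/x509 (error handling elided; this is not minikube's crypto helper):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Create a throwaway CA, roughly analogous to minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * 365 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Issue a server cert carrying the SAN IPs seen in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * 365 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.27"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
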
	I0717 17:12:55.135649   22585 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 17:12:55.135683   22585 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 17:12:55.135706   22585 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 17:12:55.135728   22585 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 17:12:55.136234   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 17:12:55.159926   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 17:12:55.181764   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 17:12:55.203105   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 17:12:55.223656   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 17:12:55.244466   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 17:12:55.264750   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 17:12:55.285254   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 17:12:55.305892   22585 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 17:12:55.326163   22585 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 17:12:55.340858   22585 ssh_runner.go:195] Run: openssl version
	I0717 17:12:55.345887   22585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 17:12:55.355009   22585 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:12:55.358796   22585 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:12:55.358841   22585 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:12:55.363751   22585 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
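
The last two commands hash the CA subject with openssl and link /etc/ssl/certs/<hash>.0 to minikubeCA.pem so OpenSSL-based clients can locate it. A sketch of the same steps (linkHashedCA is a hypothetical name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkHashedCA asks openssl for the subject hash of the CA certificate and
// links <hash>.0 to it, mirroring the openssl/ln commands logged above.
func linkHashedCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	_ = os.Remove(link) // ignore error if the link does not exist yet
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkHashedCA("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
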
	I0717 17:12:55.373131   22585 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 17:12:55.376542   22585 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 17:12:55.376595   22585 kubeadm.go:392] StartCluster: {Name:addons-435911 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-435911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:12:55.376664   22585 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 17:12:55.376710   22585 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 17:12:55.413278   22585 cri.go:89] found id: ""
	I0717 17:12:55.413375   22585 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 17:12:55.422460   22585 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 17:12:55.431257   22585 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 17:12:55.439966   22585 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 17:12:55.439994   22585 kubeadm.go:157] found existing configuration files:
	
	I0717 17:12:55.440043   22585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 17:12:55.448232   22585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 17:12:55.448281   22585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 17:12:55.457408   22585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 17:12:55.465504   22585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 17:12:55.465549   22585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 17:12:55.473958   22585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 17:12:55.481917   22585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 17:12:55.481965   22585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 17:12:55.490266   22585 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 17:12:55.498275   22585 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 17:12:55.498329   22585 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
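The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already points at the expected control-plane endpoint and is removed otherwise, so kubeadm can regenerate it. A minimal bash sketch of the same idea (endpoint and file names taken from the log; the loop itself is illustrative, not minikube's code):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the kubeconfig only if it already references the expected endpoint
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done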
	I0717 17:12:55.506459   22585 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 17:12:55.566641   22585 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 17:12:55.566715   22585 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 17:12:55.692512   22585 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 17:12:55.692640   22585 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 17:12:55.692783   22585 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 17:12:55.898992   22585 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 17:12:56.068000   22585 out.go:204]   - Generating certificates and keys ...
	I0717 17:12:56.068114   22585 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 17:12:56.068186   22585 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 17:12:56.177420   22585 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 17:12:56.307917   22585 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 17:12:56.550912   22585 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 17:12:56.837583   22585 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 17:12:56.967747   22585 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 17:12:56.967962   22585 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-435911 localhost] and IPs [192.168.39.27 127.0.0.1 ::1]
	I0717 17:12:57.343309   22585 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 17:12:57.343455   22585 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-435911 localhost] and IPs [192.168.39.27 127.0.0.1 ::1]
	I0717 17:12:57.471197   22585 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 17:12:57.602649   22585 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 17:12:57.817098   22585 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 17:12:57.817247   22585 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 17:12:57.967075   22585 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 17:12:58.337958   22585 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 17:12:58.522373   22585 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 17:12:58.690117   22585 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 17:12:58.902991   22585 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 17:12:58.903540   22585 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 17:12:58.905931   22585 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 17:12:58.907839   22585 out.go:204]   - Booting up control plane ...
	I0717 17:12:58.907968   22585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 17:12:58.908591   22585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 17:12:58.909329   22585 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 17:12:58.922822   22585 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 17:12:58.923787   22585 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 17:12:58.923848   22585 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 17:12:59.071566   22585 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 17:12:59.071657   22585 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 17:13:00.072967   22585 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001727376s
	I0717 17:13:00.073089   22585 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 17:13:04.573653   22585 kubeadm.go:310] [api-check] The API server is healthy after 4.502109739s
	I0717 17:13:04.585369   22585 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 17:13:04.603555   22585 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 17:13:04.640321   22585 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 17:13:04.640503   22585 kubeadm.go:310] [mark-control-plane] Marking the node addons-435911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 17:13:04.652822   22585 kubeadm.go:310] [bootstrap-token] Using token: ch7c38.n9iekpckubhriss0
	I0717 17:13:04.654043   22585 out.go:204]   - Configuring RBAC rules ...
	I0717 17:13:04.654161   22585 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 17:13:04.659894   22585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 17:13:04.668604   22585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 17:13:04.671334   22585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 17:13:04.674287   22585 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 17:13:04.677592   22585 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 17:13:04.981205   22585 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 17:13:05.418221   22585 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 17:13:05.983124   22585 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 17:13:05.984106   22585 kubeadm.go:310] 
	I0717 17:13:05.984196   22585 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 17:13:05.984214   22585 kubeadm.go:310] 
	I0717 17:13:05.984319   22585 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 17:13:05.984335   22585 kubeadm.go:310] 
	I0717 17:13:05.984376   22585 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 17:13:05.984458   22585 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 17:13:05.984628   22585 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 17:13:05.984647   22585 kubeadm.go:310] 
	I0717 17:13:05.984722   22585 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 17:13:05.984735   22585 kubeadm.go:310] 
	I0717 17:13:05.984805   22585 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 17:13:05.984813   22585 kubeadm.go:310] 
	I0717 17:13:05.984854   22585 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 17:13:05.984916   22585 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 17:13:05.985006   22585 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 17:13:05.985014   22585 kubeadm.go:310] 
	I0717 17:13:05.985081   22585 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 17:13:05.985146   22585 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 17:13:05.985152   22585 kubeadm.go:310] 
	I0717 17:13:05.985270   22585 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ch7c38.n9iekpckubhriss0 \
	I0717 17:13:05.985427   22585 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 17:13:05.985467   22585 kubeadm.go:310] 	--control-plane 
	I0717 17:13:05.985476   22585 kubeadm.go:310] 
	I0717 17:13:05.985583   22585 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 17:13:05.985594   22585 kubeadm.go:310] 
	I0717 17:13:05.985696   22585 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ch7c38.n9iekpckubhriss0 \
	I0717 17:13:05.985809   22585 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 17:13:05.986174   22585 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
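kubeadm's output ends with the control-plane and worker join commands plus a warning that the kubelet service is not enabled at boot. If one wanted to act on that warning and double-check the pinned CA hash by hand, the standard upstream recipe is (illustrative; run on the control-plane node):

    # enable kubelet at boot, as the warning suggests
    sudo systemctl enable kubelet.service

    # recompute the --discovery-token-ca-cert-hash value printed above
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'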
	I0717 17:13:05.986194   22585 cni.go:84] Creating CNI manager for ""
	I0717 17:13:05.986201   22585 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 17:13:05.987558   22585 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 17:13:05.988693   22585 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 17:13:05.998610   22585 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
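The 496-byte file copied above is the bridge CNI configuration minikube writes when the kvm2 driver is paired with the crio runtime. Its exact contents are not shown in the log; a generic bridge conflist of the same shape looks roughly like this (placeholder values only, not minikube's actual file):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF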
	I0717 17:13:06.017108   22585 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 17:13:06.017181   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:06.017196   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-435911 minikube.k8s.io/updated_at=2024_07_17T17_13_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=addons-435911 minikube.k8s.io/primary=true
	I0717 17:13:06.044030   22585 ops.go:34] apiserver oom_adj: -16
	I0717 17:13:06.149487   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:06.650151   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:07.150169   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:07.650579   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:08.149630   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:08.650545   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:09.149767   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:09.649718   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:10.150152   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:10.650485   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:11.149879   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:11.649857   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:12.149693   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:12.650443   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:13.150170   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:13.649826   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:14.150148   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:14.649892   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:15.150451   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:15.649910   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:16.150484   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:16.649656   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:17.150558   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:17.650581   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:18.149764   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:18.650282   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:19.149582   22585 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:13:19.258555   22585 kubeadm.go:1113] duration metric: took 13.241430959s to wait for elevateKubeSystemPrivileges
	I0717 17:13:19.258595   22585 kubeadm.go:394] duration metric: took 23.882003299s to StartCluster
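The repeated `kubectl get sa default` calls above are minikube polling roughly every 500ms until the default ServiceAccount exists (about 13.2s here, as the duration metric records). A minimal sketch of the same wait loop, reusing the kubectl path and kubeconfig shown in the log:

    KUBECTL=/var/lib/minikube/binaries/v1.30.2/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the default ServiceAccount has been created
    done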
	I0717 17:13:19.258620   22585 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:13:19.258753   22585 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:13:19.259238   22585 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:13:19.259452   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 17:13:19.259480   22585 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:13:19.259547   22585 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
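The map above records which addons this profile enables (ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, volumesnapshots, yakd, and others). Outside the test harness the same toggles are driven through the minikube CLI; for example:

    minikube -p addons-435911 addons list
    minikube -p addons-435911 addons enable ingress
    minikube -p addons-435911 addons enable metrics-server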
	I0717 17:13:19.259649   22585 addons.go:69] Setting yakd=true in profile "addons-435911"
	I0717 17:13:19.259678   22585 addons.go:234] Setting addon yakd=true in "addons-435911"
	I0717 17:13:19.259706   22585 addons.go:69] Setting gcp-auth=true in profile "addons-435911"
	I0717 17:13:19.259734   22585 mustload.go:65] Loading cluster: addons-435911
	I0717 17:13:19.259735   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.259733   22585 config.go:182] Loaded profile config "addons-435911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:13:19.259687   22585 addons.go:69] Setting cloud-spanner=true in profile "addons-435911"
	I0717 17:13:19.259789   22585 addons.go:69] Setting storage-provisioner=true in profile "addons-435911"
	I0717 17:13:19.259807   22585 addons.go:234] Setting addon cloud-spanner=true in "addons-435911"
	I0717 17:13:19.259820   22585 addons.go:234] Setting addon storage-provisioner=true in "addons-435911"
	I0717 17:13:19.259839   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.259846   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.259911   22585 config.go:182] Loaded profile config "addons-435911": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:13:19.260132   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260161   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.260227   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260245   22585 addons.go:69] Setting helm-tiller=true in profile "addons-435911"
	I0717 17:13:19.260260   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.260270   22585 addons.go:234] Setting addon helm-tiller=true in "addons-435911"
	I0717 17:13:19.260301   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.260348   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260374   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.260443   22585 addons.go:69] Setting ingress-dns=true in profile "addons-435911"
	I0717 17:13:19.260446   22585 addons.go:69] Setting ingress=true in profile "addons-435911"
	I0717 17:13:19.260479   22585 addons.go:234] Setting addon ingress=true in "addons-435911"
	I0717 17:13:19.260479   22585 addons.go:234] Setting addon ingress-dns=true in "addons-435911"
	I0717 17:13:19.260514   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.260517   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.260618   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260640   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.259693   22585 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-435911"
	I0717 17:13:19.260767   22585 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-435911"
	I0717 17:13:19.260776   22585 addons.go:69] Setting registry=true in profile "addons-435911"
	I0717 17:13:19.260802   22585 addons.go:234] Setting addon registry=true in "addons-435911"
	I0717 17:13:19.260803   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.260829   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.260236   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260873   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260882   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.260895   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.260893   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.260928   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.261184   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.261198   22585 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-435911"
	I0717 17:13:19.261213   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.261221   22585 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-435911"
	I0717 17:13:19.261243   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.259702   22585 addons.go:69] Setting metrics-server=true in profile "addons-435911"
	I0717 17:13:19.261426   22585 addons.go:234] Setting addon metrics-server=true in "addons-435911"
	I0717 17:13:19.261453   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.261547   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.261565   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.261862   22585 addons.go:69] Setting volcano=true in profile "addons-435911"
	I0717 17:13:19.261895   22585 addons.go:234] Setting addon volcano=true in "addons-435911"
	I0717 17:13:19.261925   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.262288   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.262335   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.263313   22585 out.go:177] * Verifying Kubernetes components...
	I0717 17:13:19.263320   22585 addons.go:69] Setting volumesnapshots=true in profile "addons-435911"
	I0717 17:13:19.263360   22585 addons.go:234] Setting addon volumesnapshots=true in "addons-435911"
	I0717 17:13:19.263394   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.259671   22585 addons.go:69] Setting default-storageclass=true in profile "addons-435911"
	I0717 17:13:19.264009   22585 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-435911"
	I0717 17:13:19.266327   22585 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:13:19.266408   22585 addons.go:69] Setting inspektor-gadget=true in profile "addons-435911"
	I0717 17:13:19.266431   22585 addons.go:234] Setting addon inspektor-gadget=true in "addons-435911"
	I0717 17:13:19.266454   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.266806   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.266824   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.267759   22585 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-435911"
	I0717 17:13:19.267789   22585 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-435911"
	I0717 17:13:19.261189   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.267881   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.268127   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.268143   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.282079   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45619
	I0717 17:13:19.282617   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.283127   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.283149   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.283482   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.284037   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.284080   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.286386   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0717 17:13:19.286572   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I0717 17:13:19.286945   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.287676   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.287692   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.288038   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.288221   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.288425   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34675
	I0717 17:13:19.288882   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.288896   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0717 17:13:19.288984   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.289213   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.289405   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.289430   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.289671   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.289719   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.289820   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.289841   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.289907   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.290084   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.290117   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.290252   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.290492   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.290523   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.290603   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.290641   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.290911   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39115
	I0717 17:13:19.291183   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.291283   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.291322   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.291603   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.291627   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.292184   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.292576   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0717 17:13:19.297304   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.297338   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.297633   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.297634   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.297651   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.297656   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.297994   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.298014   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.298316   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.298344   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.309290   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39069
	I0717 17:13:19.309603   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.309723   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.310661   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.310684   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.310935   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.310951   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.311124   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.311359   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.311818   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.311855   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.313465   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.313512   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.318988   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0717 17:13:19.319555   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.320083   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.320099   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.320420   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.320566   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.322454   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.324848   22585 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 17:13:19.325998   22585 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 17:13:19.326016   22585 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 17:13:19.326037   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.328268   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45953
	I0717 17:13:19.328782   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.329329   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.329345   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.329397   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.329423   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.329438   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.329850   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.329885   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.330033   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.330214   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.330377   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.330855   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.330901   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.333974   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
	I0717 17:13:19.337688   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35393
	I0717 17:13:19.337749   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I0717 17:13:19.338082   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39675
	I0717 17:13:19.338221   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.338255   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.338472   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.338904   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.338923   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.338995   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.339011   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.339048   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.339059   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.339437   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.339472   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.340012   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.340037   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.340051   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.340069   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.340597   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36499
	I0717 17:13:19.340916   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.341025   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.342985   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I0717 17:13:19.343072   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.343088   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.343100   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.343741   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.343941   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.344006   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.345248   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.345266   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.345334   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.345821   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.345837   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.346465   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.346998   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.347562   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I0717 17:13:19.347680   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.347769   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.348138   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.348619   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.349376   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.349395   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.350011   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 17:13:19.350365   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.350366   22585 addons.go:234] Setting addon default-storageclass=true in "addons-435911"
	I0717 17:13:19.350421   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.350747   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.350784   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.350952   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.352520   22585 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 17:13:19.352876   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34857
	I0717 17:13:19.352893   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 17:13:19.353232   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.353676   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.353699   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.353992   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.354296   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.354367   22585 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 17:13:19.354379   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 17:13:19.354392   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.354649   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.355605   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 17:13:19.356409   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.357197   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44721
	I0717 17:13:19.357847   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.357869   22585 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-435911"
	I0717 17:13:19.357905   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:19.358282   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.358327   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.358396   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.358521   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0717 17:13:19.358619   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 17:13:19.358664   22585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 17:13:19.358899   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.358920   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.358938   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.359094   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.359137   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.359336   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.359470   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.359737   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44405
	I0717 17:13:19.359973   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.359990   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.360047   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.360518   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.360535   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.360748   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.360761   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.360812   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.361206   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.361215   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.361519   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.362017   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 17:13:19.362078   22585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 17:13:19.362375   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.362418   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.362829   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.364150   22585 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 17:13:19.364204   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 17:13:19.364224   22585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 17:13:19.365568   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.365612   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.365669   22585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 17:13:19.365689   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 17:13:19.365705   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.366040   22585 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 17:13:19.366060   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 17:13:19.366076   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.367349   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 17:13:19.368560   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 17:13:19.369573   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0717 17:13:19.369828   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 17:13:19.369850   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 17:13:19.369869   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.370312   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.372671   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.375717   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.375729   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.375730   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.375751   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43531
	I0717 17:13:19.375735   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.375811   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.375816   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.375830   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.375719   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.375850   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.375902   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.375920   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.376120   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.376125   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.376170   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.376362   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.376400   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.376439   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.376488   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.376563   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.376937   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.376962   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.376990   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.377006   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.377155   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.377391   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.377520   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.377583   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.377950   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.377965   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.378005   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.379470   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.381473   22585 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 17:13:19.382839   22585 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 17:13:19.382853   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44365
	I0717 17:13:19.382860   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 17:13:19.382877   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.383464   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46879
	I0717 17:13:19.383761   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.384381   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.384402   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.384559   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.384776   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.384990   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.385134   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.385156   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.386149   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.386554   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.386588   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.386764   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.386938   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.386985   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.387097   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.387220   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:19.387233   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:19.387230   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.387643   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:19.387654   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:19.387662   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:19.387668   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:19.389277   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:19.389281   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:19.389297   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 17:13:19.389396   22585 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
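The warning above shows the volcano addon being skipped because it does not support the crio runtime; the remaining addon setup continues. On a crio-based profile the addon can simply be left (or turned) off, e.g.:

    minikube -p addons-435911 addons disable volcano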
	I0717 17:13:19.389967   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.390131   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.391949   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.395741   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46411
	I0717 17:13:19.395742   22585 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 17:13:19.396266   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.396795   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.396822   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.397225   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.397773   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
	I0717 17:13:19.397819   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.397842   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.397950   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33921
	I0717 17:13:19.398166   22585 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 17:13:19.398183   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 17:13:19.398200   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.398428   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44489
	I0717 17:13:19.398632   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.398652   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.398967   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.399153   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.399176   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.399651   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.399674   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.399768   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.399787   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.399984   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.400109   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.400159   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.400299   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.401004   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.401212   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.402182   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.402259   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.403187   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.403188   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.403224   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.403395   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.403438   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.403663   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.403848   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.403988   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.404131   22585 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 17:13:19.404338   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34351
	I0717 17:13:19.404837   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.404960   22585 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 17:13:19.405361   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.405389   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.405780   22585 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 17:13:19.406202   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.406748   22585 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 17:13:19.406769   22585 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 17:13:19.406781   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:19.406789   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.407228   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:19.407674   22585 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 17:13:19.407690   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 17:13:19.407882   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.408737   22585 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 17:13:19.410191   22585 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 17:13:19.410209   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0717 17:13:19.410226   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.411776   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.412263   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.412293   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.412431   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.412498   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.413187   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.413206   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.413391   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.413500   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40319
	I0717 17:13:19.413677   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.413827   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.414601   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.414646   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.414733   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.415399   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.415417   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.415613   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.415771   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.415918   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.416046   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.416812   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.416829   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.417780   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.418093   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.418161   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.418370   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.418799   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.420400   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.421114   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
	I0717 17:13:19.421694   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.422238   22585 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 17:13:19.422297   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.422312   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.422688   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.422756   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45781
	I0717 17:13:19.422905   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.423162   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.423493   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 17:13:19.423508   22585 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 17:13:19.423525   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.424511   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.424533   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.424602   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.424844   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.425048   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.425467   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42253
	I0717 17:13:19.425791   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:19.426146   22585 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 17:13:19.426307   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:19.426327   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:19.426654   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:19.426760   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.426823   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:19.427368   22585 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 17:13:19.427382   22585 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 17:13:19.427395   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.428092   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:19.428347   22585 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 17:13:19.428417   22585 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 17:13:19.428429   22585 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 17:13:19.428445   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.428831   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.429805   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.429832   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.430022   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.430165   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.430336   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.430611   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.431198   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.431385   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.431688   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.431708   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.431714   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.431729   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.431910   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.432042   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.432063   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.432181   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.432219   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.432316   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.432359   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.432475   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.432905   22585 out.go:177]   - Using image docker.io/busybox:stable
	W0717 17:13:19.433505   22585 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56152->192.168.39.27:22: read: connection reset by peer
	I0717 17:13:19.433528   22585 retry.go:31] will retry after 239.941694ms: ssh: handshake failed: read tcp 192.168.39.1:56152->192.168.39.27:22: read: connection reset by peer
	W0717 17:13:19.433571   22585 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56166->192.168.39.27:22: read: connection reset by peer
	I0717 17:13:19.433576   22585 retry.go:31] will retry after 252.999752ms: ssh: handshake failed: read tcp 192.168.39.1:56166->192.168.39.27:22: read: connection reset by peer
	I0717 17:13:19.434584   22585 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 17:13:19.434605   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 17:13:19.434616   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:19.437442   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.437817   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:19.437843   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:19.438000   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:19.438178   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:19.438342   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:19.438483   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:19.705863   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 17:13:19.744518   22585 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:13:19.744758   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 17:13:19.766595   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 17:13:19.766614   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 17:13:19.835203   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 17:13:19.872004   22585 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 17:13:19.872029   22585 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 17:13:19.876973   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 17:13:19.887699   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 17:13:19.902999   22585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 17:13:19.903017   22585 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 17:13:19.909574   22585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 17:13:19.909595   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 17:13:19.960139   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 17:13:19.978621   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 17:13:19.978648   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 17:13:19.980168   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 17:13:19.991622   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 17:13:20.019750   22585 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 17:13:20.019775   22585 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 17:13:20.075569   22585 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 17:13:20.075593   22585 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 17:13:20.086721   22585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 17:13:20.086738   22585 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 17:13:20.116281   22585 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 17:13:20.116301   22585 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 17:13:20.181868   22585 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 17:13:20.181889   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 17:13:20.186233   22585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 17:13:20.186253   22585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 17:13:20.199425   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 17:13:20.199444   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 17:13:20.266547   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 17:13:20.266569   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 17:13:20.293266   22585 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 17:13:20.293289   22585 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 17:13:20.317762   22585 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 17:13:20.317783   22585 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 17:13:20.357495   22585 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 17:13:20.357521   22585 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 17:13:20.376508   22585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 17:13:20.376532   22585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 17:13:20.441215   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 17:13:20.449073   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 17:13:20.449098   22585 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 17:13:20.471978   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 17:13:20.508928   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 17:13:20.511076   22585 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 17:13:20.511097   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 17:13:20.533950   22585 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 17:13:20.533978   22585 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 17:13:20.622759   22585 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 17:13:20.622787   22585 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 17:13:20.665467   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 17:13:20.665495   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 17:13:20.736531   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 17:13:20.807876   22585 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 17:13:20.807905   22585 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 17:13:20.953250   22585 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 17:13:20.953273   22585 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 17:13:21.071942   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 17:13:21.071967   22585 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 17:13:21.116194   22585 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 17:13:21.116224   22585 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 17:13:21.127859   22585 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 17:13:21.127876   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 17:13:21.264815   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 17:13:21.264838   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 17:13:21.268500   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 17:13:21.277441   22585 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 17:13:21.277461   22585 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 17:13:21.470115   22585 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 17:13:21.470142   22585 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 17:13:21.494235   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 17:13:21.494253   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 17:13:21.658226   22585 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 17:13:21.658248   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 17:13:21.764123   22585 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 17:13:21.764142   22585 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 17:13:21.909230   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 17:13:22.015063   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 17:13:26.400207   22585 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 17:13:26.400241   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:26.403825   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:26.404251   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:26.404276   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:26.404469   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:26.404698   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:26.404872   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:26.405050   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:26.631674   22585 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 17:13:26.676016   22585 addons.go:234] Setting addon gcp-auth=true in "addons-435911"
	I0717 17:13:26.676071   22585 host.go:66] Checking if "addons-435911" exists ...
	I0717 17:13:26.676400   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:26.676428   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:26.691144   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I0717 17:13:26.691581   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:26.692067   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:26.692094   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:26.692389   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:26.692939   22585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:13:26.692992   22585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:13:26.708917   22585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37497
	I0717 17:13:26.709317   22585 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:13:26.709805   22585 main.go:141] libmachine: Using API Version  1
	I0717 17:13:26.709835   22585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:13:26.710127   22585 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:13:26.710355   22585 main.go:141] libmachine: (addons-435911) Calling .GetState
	I0717 17:13:26.711966   22585 main.go:141] libmachine: (addons-435911) Calling .DriverName
	I0717 17:13:26.712193   22585 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 17:13:26.712218   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHHostname
	I0717 17:13:26.715201   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:26.715628   22585 main.go:141] libmachine: (addons-435911) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:64:f4", ip: ""} in network mk-addons-435911: {Iface:virbr1 ExpiryTime:2024-07-17 18:12:34 +0000 UTC Type:0 Mac:52:54:00:9b:64:f4 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:addons-435911 Clientid:01:52:54:00:9b:64:f4}
	I0717 17:13:26.715680   22585 main.go:141] libmachine: (addons-435911) DBG | domain addons-435911 has defined IP address 192.168.39.27 and MAC address 52:54:00:9b:64:f4 in network mk-addons-435911
	I0717 17:13:26.715803   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHPort
	I0717 17:13:26.716001   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHKeyPath
	I0717 17:13:26.716158   22585 main.go:141] libmachine: (addons-435911) Calling .GetSSHUsername
	I0717 17:13:26.716309   22585 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/addons-435911/id_rsa Username:docker}
	I0717 17:13:27.274764   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.568868073s)
	I0717 17:13:27.274813   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.274825   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.274828   22585 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.530279475s)
	I0717 17:13:27.274956   22585 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.530161645s)
	I0717 17:13:27.274982   22585 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 17:13:27.275037   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.439806821s)
	I0717 17:13:27.275078   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275089   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275126   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.398124786s)
	I0717 17:13:27.275164   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275177   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.275189   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.275202   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.387476885s)
	I0717 17:13:27.275209   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.275220   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275222   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275229   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275232   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275264   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275265   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.315095591s)
	I0717 17:13:27.275286   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275297   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275333   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.295140583s)
	I0717 17:13:27.275356   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275364   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275372   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.283728421s)
	I0717 17:13:27.275390   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275397   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275422   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.834175132s)
	I0717 17:13:27.275436   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275444   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275457   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.803450658s)
	I0717 17:13:27.275472   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275481   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275533   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.766565698s)
	I0717 17:13:27.275548   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275555   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275613   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.539054556s)
	I0717 17:13:27.275627   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275636   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275750   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.007224101s)
	W0717 17:13:27.275779   22585 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 17:13:27.275798   22585 retry.go:31] will retry after 309.615159ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 17:13:27.275823   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.275856   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.275863   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.275871   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275881   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.275871   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.366613453s)
	I0717 17:13:27.275929   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.275936   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.276220   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.276230   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.276238   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.276245   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.276302   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.276320   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.276325   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.276332   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.276338   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.276498   22585 node_ready.go:35] waiting up to 6m0s for node "addons-435911" to be "Ready" ...
	I0717 17:13:27.276573   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.276590   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.276610   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.276628   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.276634   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.276641   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.278202   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.278213   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.278222   22585 addons.go:475] Verifying addon ingress=true in "addons-435911"
	I0717 17:13:27.279592   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279611   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279621   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.279643   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279650   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279658   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.279665   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.279712   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279718   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279724   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.279731   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.279770   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.279788   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.279809   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279820   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279828   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.279834   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.279878   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279886   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279893   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.279899   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.279935   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.279955   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.279961   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.279968   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.279975   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.280018   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.280038   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.280043   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.280051   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.280057   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.280198   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.280233   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.280242   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.280657   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.280682   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.280689   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.281566   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.281634   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.281642   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.281743   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.281763   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.281770   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.281918   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.281937   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.281951   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.281960   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.281966   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.281969   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.281977   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.281983   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.281989   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.279599   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.282099   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.282103   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.282111   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.282121   22585 addons.go:475] Verifying addon metrics-server=true in "addons-435911"
	I0717 17:13:27.282122   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.282134   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.282142   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.282145   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.282150   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.282215   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.282222   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.282547   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.282563   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.282573   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.282585   22585 addons.go:475] Verifying addon registry=true in "addons-435911"
	I0717 17:13:27.283871   22585 out.go:177] * Verifying ingress addon...
	I0717 17:13:27.283889   22585 out.go:177] * Verifying registry addon...
	I0717 17:13:27.283873   22585 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-435911 service yakd-dashboard -n yakd-dashboard
	
	I0717 17:13:27.285947   22585 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 17:13:27.286461   22585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 17:13:27.298956   22585 node_ready.go:49] node "addons-435911" has status "Ready":"True"
	I0717 17:13:27.298974   22585 node_ready.go:38] duration metric: took 22.455685ms for node "addons-435911" to be "Ready" ...
	I0717 17:13:27.298984   22585 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 17:13:27.343374   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.343394   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.343651   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:27.343675   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:27.343799   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.343814   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:27.343835   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:27.343841   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:27.343848   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	W0717 17:13:27.343926   22585 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0717 17:13:27.353750   22585 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 17:13:27.353772   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:27.353899   22585 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 17:13:27.353913   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:27.367217   22585 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g8svc" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.401392   22585 pod_ready.go:92] pod "coredns-7db6d8ff4d-g8svc" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:27.401410   22585 pod_ready.go:81] duration metric: took 34.173211ms for pod "coredns-7db6d8ff4d-g8svc" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.401420   22585 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ktksd" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.486557   22585 pod_ready.go:92] pod "coredns-7db6d8ff4d-ktksd" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:27.486586   22585 pod_ready.go:81] duration metric: took 85.16004ms for pod "coredns-7db6d8ff4d-ktksd" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.486597   22585 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.502686   22585 pod_ready.go:92] pod "etcd-addons-435911" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:27.502710   22585 pod_ready.go:81] duration metric: took 16.106815ms for pod "etcd-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.502724   22585 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.519688   22585 pod_ready.go:92] pod "kube-apiserver-addons-435911" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:27.519707   22585 pod_ready.go:81] duration metric: took 16.977944ms for pod "kube-apiserver-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.519717   22585 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.585902   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 17:13:27.679243   22585 pod_ready.go:92] pod "kube-controller-manager-addons-435911" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:27.679265   22585 pod_ready.go:81] duration metric: took 159.541766ms for pod "kube-controller-manager-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.679283   22585 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s2kxf" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:27.779761   22585 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-435911" context rescaled to 1 replicas
	I0717 17:13:27.810226   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:27.812112   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:28.095268   22585 pod_ready.go:92] pod "kube-proxy-s2kxf" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:28.095289   22585 pod_ready.go:81] duration metric: took 416.000282ms for pod "kube-proxy-s2kxf" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:28.095317   22585 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:28.130754   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.115637868s)
	I0717 17:13:28.130793   22585 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.418582136s)
	I0717 17:13:28.130803   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:28.130817   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:28.131188   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:28.131236   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:28.131260   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:28.131273   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:28.131246   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:28.131487   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:28.131523   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:28.131537   22585 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-435911"
	I0717 17:13:28.132340   22585 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 17:13:28.133142   22585 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 17:13:28.134684   22585 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 17:13:28.135424   22585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 17:13:28.136073   22585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 17:13:28.136088   22585 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 17:13:28.162119   22585 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 17:13:28.162140   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:28.218709   22585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 17:13:28.218728   22585 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 17:13:28.239543   22585 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 17:13:28.239564   22585 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 17:13:28.257853   22585 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 17:13:28.294404   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:28.294678   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:28.481026   22585 pod_ready.go:92] pod "kube-scheduler-addons-435911" in "kube-system" namespace has status "Ready":"True"
	I0717 17:13:28.481054   22585 pod_ready.go:81] duration metric: took 385.728782ms for pod "kube-scheduler-addons-435911" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:28.481068   22585 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace to be "Ready" ...
	I0717 17:13:28.643468   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:28.793716   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:28.796515   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:29.141494   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:29.291068   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:29.291634   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:29.406316   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.82036948s)
	I0717 17:13:29.406376   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:29.406393   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:29.406640   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:29.406678   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:29.406692   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:29.406700   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:29.406909   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:29.406925   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:29.661205   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:29.817221   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:29.817281   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:29.867825   22585 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.60993507s)
	I0717 17:13:29.867881   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:29.867891   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:29.868165   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:29.868183   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:29.868206   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:29.868270   22585 main.go:141] libmachine: Making call to close driver server
	I0717 17:13:29.868277   22585 main.go:141] libmachine: (addons-435911) Calling .Close
	I0717 17:13:29.868504   22585 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:13:29.868515   22585 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:13:29.868558   22585 main.go:141] libmachine: (addons-435911) DBG | Closing plugin on server side
	I0717 17:13:29.869889   22585 addons.go:475] Verifying addon gcp-auth=true in "addons-435911"
	I0717 17:13:29.871434   22585 out.go:177] * Verifying gcp-auth addon...
	I0717 17:13:29.873679   22585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 17:13:29.895767   22585 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 17:13:29.895799   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:30.140660   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:30.292316   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:30.294170   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:30.387469   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:30.488638   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:30.643972   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:30.792329   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:30.792989   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:30.884719   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:31.140493   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:31.291776   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:31.291899   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:31.377684   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:31.640813   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:31.791547   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:31.793109   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:31.878065   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:32.140249   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:32.289968   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:32.291541   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:32.377561   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:32.641818   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:32.791237   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:32.791487   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:32.877163   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:32.988750   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:33.141137   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:33.290168   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:33.291866   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:33.377813   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:33.640182   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:33.791000   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:33.791493   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:33.877709   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:34.141092   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:34.290754   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:34.292652   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:34.376819   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:34.648378   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:34.791007   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:34.791366   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:34.877122   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:35.140649   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:35.298060   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:35.303637   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:35.377875   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:35.486118   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:35.648598   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:35.790916   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:35.792643   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:35.877316   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:36.140931   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:36.290769   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:36.291319   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:36.377423   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:36.771937   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:36.791995   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:36.794035   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:36.877253   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:37.142416   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:37.294458   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:37.295899   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:37.378105   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:37.486808   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:37.641006   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:37.789878   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:37.791006   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:37.876497   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:38.140920   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:38.297672   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:38.297829   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:38.377549   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:38.641027   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:38.790236   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:38.792740   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:38.877179   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:39.140385   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:39.292467   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:39.292795   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:39.377798   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:39.640313   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:39.792764   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:39.792918   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:39.878364   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:39.985831   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:40.141468   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:40.294604   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:40.294830   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:40.377851   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:40.641101   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:40.791642   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:40.792377   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:40.877812   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:41.141303   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:41.290479   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:41.293165   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:41.377860   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:41.641829   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:41.801163   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:41.802323   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:41.877739   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:41.986671   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:42.142097   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:42.560308   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:42.561443   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:42.561725   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:42.641199   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:42.793545   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:42.795432   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:42.877086   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:43.143660   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:43.289529   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:43.295135   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:43.376834   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:43.640875   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:43.796874   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:43.797541   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:43.877739   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:43.988681   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:44.141867   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:44.291643   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:44.293206   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:44.590560   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:44.776929   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:44.797756   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:44.798011   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:44.877233   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:45.141251   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:45.293243   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:45.296349   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:45.378630   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:45.643857   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:45.790896   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:45.791135   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:45.876899   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:46.141163   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:46.290132   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:46.291580   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:46.377495   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:46.486223   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:46.641310   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:46.790910   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:46.791636   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:46.877656   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:47.141596   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:47.291688   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:47.291848   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:47.377976   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:47.641520   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:47.791877   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:47.792025   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:47.877726   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:48.140656   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:48.291771   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:48.292749   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:48.377088   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:48.492345   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:48.640688   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:48.795009   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:48.795429   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:48.884648   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:49.140976   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:49.290067   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:49.292506   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:49.376491   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:49.641334   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:49.810672   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:49.812017   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:49.877727   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:50.140575   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:50.292112   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:50.293444   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:50.380328   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:50.640391   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:50.790943   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:50.792356   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:50.877250   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:50.987963   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:51.140781   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:51.291408   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:51.294802   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:51.377933   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:51.640685   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:51.790095   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:51.791560   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:51.880253   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:52.140966   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:52.291218   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:52.291223   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:52.376811   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:52.641590   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:52.792037   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:52.792285   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:52.883179   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:52.990905   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:53.141198   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:53.290578   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:53.292500   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:53.377572   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:53.640309   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:53.790801   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:53.792375   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:53.882499   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:54.141835   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:54.294795   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:54.296095   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:54.377916   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:54.640235   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:54.792333   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:54.792572   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:54.877876   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:55.141931   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:55.292715   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:55.295442   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:55.377448   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:55.486975   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:55.652236   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:55.791258   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:55.791549   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:55.877657   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:56.140899   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:56.291456   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:56.292166   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:56.376882   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:56.641246   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:56.791854   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:56.793108   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:56.876833   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:57.140100   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:57.290534   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:57.292058   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:57.376837   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:57.642749   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:57.791226   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:57.791496   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:57.877123   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:57.991113   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:13:58.140185   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:58.290893   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:58.290952   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:58.378281   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:58.641102   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:58.791402   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:58.791745   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:58.880000   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:59.488939   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:59.489830   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:59.490355   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:13:59.494910   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:59.642203   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:13:59.794706   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:13:59.794878   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:13:59.877389   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:00.139974   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:00.292113   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:00.292933   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:00.377028   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:00.486514   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:00.641482   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:00.790518   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:00.790878   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:00.877583   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:01.140615   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:01.289607   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:01.291384   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:01.377854   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:01.640127   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:01.792069   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:01.793178   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:01.877984   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:02.140697   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:02.289348   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:02.291741   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:02.377508   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:02.653068   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:02.791658   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:02.791806   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:02.876654   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:02.986552   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:03.139878   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:03.291004   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:03.291392   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:03.378056   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:03.640806   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:03.791111   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:03.791328   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:03.876966   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:04.140644   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:04.289618   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:04.291809   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:04.378790   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:04.953821   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:04.953898   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:04.953901   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:04.953966   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:04.991276   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:05.141477   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:05.291839   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:05.292506   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:05.377501   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:05.640404   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:05.792436   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:05.792643   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 17:14:05.877250   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:06.141004   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:06.290815   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:06.291521   22585 kapi.go:107] duration metric: took 39.005059103s to wait for kubernetes.io/minikube-addons=registry ...
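The kapi.go:96 / kapi.go:107 pairs above are a label-selector readiness poll: list the pods matching the addon's label, report their current phase, and record the elapsed time once they are all Running. A minimal client-go sketch of that pattern follows; it is an illustration under assumptions (kubeconfig location, 500ms poll interval, hypothetical helper name waitForLabeledPods), not minikube's actual kapi.go code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPods polls until every pod matching selector in ns is Running,
    // mirroring the "waiting for pod ... current state: Pending" loop in the log.
    func waitForLabeledPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // keep polling on transient errors or an empty list
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err == nil {
            fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
        }
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        _ = waitForLabeledPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
    }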
	I0717 17:14:06.377255   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:06.641789   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:06.790294   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:06.877269   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:07.140280   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:07.290103   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:07.381226   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:07.487094   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:07.641652   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:07.790703   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:07.877286   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:08.140964   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:08.296229   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:08.380241   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:08.640796   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:08.791121   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:08.878665   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:09.145575   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:09.292026   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:09.378606   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:09.487251   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:09.640564   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:09.791554   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:09.970621   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:10.140810   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:10.290435   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:10.377285   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:10.640583   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:10.789501   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:10.877544   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:11.141561   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:11.290667   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:11.377412   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:11.640310   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:11.790900   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:11.878337   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:11.987119   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:12.141130   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:12.290424   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:12.377844   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:12.640376   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:12.791383   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:12.876964   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:13.189503   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:13.290311   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:13.377807   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:13.641285   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:13.790517   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:13.877891   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:13.987714   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:14.140449   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:14.291024   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:14.378105   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:14.639788   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:14.790928   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:14.878117   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:15.141089   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:15.291491   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:15.380415   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:15.640205   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:15.790896   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:15.877961   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:16.141151   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:16.290447   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:16.378192   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:16.488690   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:16.646594   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:16.792529   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:16.877721   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:17.140675   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:17.289650   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:17.376746   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:17.641153   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:17.790340   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:17.879465   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:18.141437   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:18.290359   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:18.378938   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:18.640524   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:18.790656   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:18.877751   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:18.986359   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:19.149834   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:19.292731   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:19.377842   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:19.641049   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:19.790575   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:19.877582   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:20.141617   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:20.291749   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:20.378141   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:20.640984   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:20.791767   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:20.879559   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:20.986772   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:21.140539   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:21.291069   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:21.377464   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:21.640652   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:21.792323   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:21.878670   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:22.141248   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:22.290616   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:22.376864   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:22.641393   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:22.790412   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:22.878253   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:22.987010   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:23.141356   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:23.487504   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:23.490091   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:23.641355   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:23.790142   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:23.877803   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:24.141169   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:24.290610   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:24.376855   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:24.639653   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:24.789876   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:24.877829   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:25.140425   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:25.290673   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:25.377683   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:25.490099   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:25.640631   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:25.794052   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:25.878116   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:26.140412   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:26.291304   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:26.377696   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:26.640414   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:26.790699   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:26.877644   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:27.418126   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:27.420384   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:27.421097   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:27.643481   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:27.790547   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:27.888655   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:27.988750   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:28.140486   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:28.291220   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:28.385544   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:28.640746   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:28.791390   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:28.877663   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:29.141223   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:29.291310   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:29.380113   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:29.647933   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:29.792217   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:29.881418   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:30.141220   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:30.291343   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:30.377167   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:30.487613   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:30.639971   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:30.790285   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:30.876651   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:31.141066   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:31.290291   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:31.376956   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:31.648489   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:31.790861   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:31.878623   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:32.140447   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:32.290863   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:32.378550   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:32.495069   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:32.640902   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:32.790968   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:32.878870   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:33.141426   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:33.290874   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:33.377887   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:33.640768   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:33.791424   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:33.876624   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:34.145749   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:34.292080   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:34.377751   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:34.663499   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:34.791489   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:34.879404   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:34.986757   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:35.141070   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:35.290916   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:35.379037   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:35.641916   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:35.791734   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:35.877783   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:36.141381   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:36.290517   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:36.377155   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:36.640827   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:36.790673   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:36.877777   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:37.140627   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:37.290436   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:37.378829   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:37.486813   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:37.641257   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:37.790503   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:37.887156   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:38.140572   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:38.290700   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:38.377673   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:38.641654   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:38.791028   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:38.877256   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:39.141070   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:39.290055   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:39.377772   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:39.486910   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:39.640219   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:39.790385   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:39.877587   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:40.143175   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:40.292702   22585 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 17:14:40.381640   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:40.640495   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:40.790741   22585 kapi.go:107] duration metric: took 1m13.504794123s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 17:14:40.877249   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:41.140746   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:41.377393   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:41.655015   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:41.877757   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:41.986472   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:42.141038   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:42.377489   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:42.640631   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:42.877603   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:43.141542   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:43.377722   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:43.640393   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:43.876744   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:43.987203   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:44.140797   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:44.376970   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:44.641095   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:44.877136   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:45.140187   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:45.376905   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 17:14:45.640785   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:45.878006   22585 kapi.go:107] duration metric: took 1m16.004325711s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 17:14:45.879791   22585 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-435911 cluster.
	I0717 17:14:45.881165   22585 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 17:14:45.882381   22585 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
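Per the gcp-auth notes above, the credential mount is skipped for any pod that carries the gcp-auth-skip-secret label key. A hedged client-go illustration follows; the pod name, image, namespace, and the label value "true" are invented for the example, and only the label key itself comes from the message above.

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: "no-gcp-creds",
                // Label key taken from the log message above; the value is arbitrary.
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}}},
            },
        }
        // A pod created with this label should not get the GCP credential mount injected.
        if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }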
	I0717 17:14:46.142780   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:46.486972   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:46.640607   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:47.140932   22585 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 17:14:47.640345   22585 kapi.go:107] duration metric: took 1m19.504917945s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 17:14:47.642128   22585 out.go:177] * Enabled addons: storage-provisioner, inspektor-gadget, ingress-dns, nvidia-device-plugin, helm-tiller, metrics-server, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0717 17:14:47.643397   22585 addons.go:510] duration metric: took 1m28.383848509s for enable addons: enabled=[storage-provisioner inspektor-gadget ingress-dns nvidia-device-plugin helm-tiller metrics-server cloud-spanner yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0717 17:14:48.567610   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:50.986936   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:53.487336   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:55.488764   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:14:57.986596   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:15:00.487301   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:15:02.487992   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:15:04.986282   22585 pod_ready.go:102] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"False"
	I0717 17:15:05.987028   22585 pod_ready.go:92] pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace has status "Ready":"True"
	I0717 17:15:05.987049   22585 pod_ready.go:81] duration metric: took 1m37.505973488s for pod "metrics-server-c59844bb4-qfn6h" in "kube-system" namespace to be "Ready" ...
	I0717 17:15:05.987060   22585 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xst8q" in "kube-system" namespace to be "Ready" ...
	I0717 17:15:05.990897   22585 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-xst8q" in "kube-system" namespace has status "Ready":"True"
	I0717 17:15:05.990914   22585 pod_ready.go:81] duration metric: took 3.847877ms for pod "nvidia-device-plugin-daemonset-xst8q" in "kube-system" namespace to be "Ready" ...
	I0717 17:15:05.990930   22585 pod_ready.go:38] duration metric: took 1m38.691935933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
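The pod_ready.go lines report the pod's Ready condition ("Ready":"True"/"False") rather than just its phase. Below is a minimal sketch of that condition check with client-go, assuming the same kubeconfig conventions as above; isPodReady is a hypothetical helper, not minikube's pod_ready.go.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, which is what
    // the `has status "Ready":"True"/"False"` log lines are printing.
    func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := isPodReady(cs, "kube-system", "metrics-server-c59844bb4-qfn6h")
        fmt.Println(ready, err)
    }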
	I0717 17:15:05.990947   22585 api_server.go:52] waiting for apiserver process to appear ...
	I0717 17:15:05.991001   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 17:15:05.991055   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 17:15:06.040532   22585 cri.go:89] found id: "fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:06.040562   22585 cri.go:89] found id: ""
	I0717 17:15:06.040570   22585 logs.go:276] 1 containers: [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0]
	I0717 17:15:06.040632   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.044413   22585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 17:15:06.044470   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 17:15:06.079775   22585 cri.go:89] found id: "8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:06.079800   22585 cri.go:89] found id: ""
	I0717 17:15:06.079808   22585 logs.go:276] 1 containers: [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301]
	I0717 17:15:06.079869   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.083396   22585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 17:15:06.083449   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 17:15:06.120734   22585 cri.go:89] found id: "65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:06.120752   22585 cri.go:89] found id: ""
	I0717 17:15:06.120759   22585 logs.go:276] 1 containers: [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3]
	I0717 17:15:06.120801   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.124643   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 17:15:06.124711   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 17:15:06.169611   22585 cri.go:89] found id: "e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:06.169631   22585 cri.go:89] found id: ""
	I0717 17:15:06.169640   22585 logs.go:276] 1 containers: [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49]
	I0717 17:15:06.169698   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.175354   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 17:15:06.175410   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 17:15:06.216006   22585 cri.go:89] found id: "e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:06.216024   22585 cri.go:89] found id: ""
	I0717 17:15:06.216031   22585 logs.go:276] 1 containers: [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e]
	I0717 17:15:06.216073   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.220002   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 17:15:06.220057   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 17:15:06.257959   22585 cri.go:89] found id: "9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:06.257978   22585 cri.go:89] found id: ""
	I0717 17:15:06.257985   22585 logs.go:276] 1 containers: [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f]
	I0717 17:15:06.258030   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:06.261743   22585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 17:15:06.261798   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 17:15:06.296423   22585 cri.go:89] found id: ""
	I0717 17:15:06.296451   22585 logs.go:276] 0 containers: []
	W0717 17:15:06.296462   22585 logs.go:278] No container was found matching "kindnet"
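Each cri.go "found id" entry above comes from running the crictl query shown in the preceding ssh_runner line and splitting its output into container IDs; an empty result, as with kindnet here, means no matching container. A rough local equivalent in Go is sketched below (it requires crictl and root on the node, and is an illustration rather than minikube's cri.go).

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (any state) whose name matches
    // the given component, the same query the cri.go lines issue over SSH.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := containerIDs(c)
            fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
        }
    }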
	I0717 17:15:06.296471   22585 logs.go:123] Gathering logs for kube-proxy [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e] ...
	I0717 17:15:06.296483   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:06.333111   22585 logs.go:123] Gathering logs for CRI-O ...
	I0717 17:15:06.333144   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 17:15:07.365377   22585 logs.go:123] Gathering logs for kubelet ...
	I0717 17:15:07.365425   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 17:15:07.419259   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.707621    1283 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.419487   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.707649    1283 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.421391   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.961993    1283 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.421543   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.423601   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.423754   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.423891   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.424078   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:07.449610   22585 logs.go:123] Gathering logs for dmesg ...
	I0717 17:15:07.449645   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 17:15:07.464490   22585 logs.go:123] Gathering logs for describe nodes ...
	I0717 17:15:07.464519   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 17:15:07.588650   22585 logs.go:123] Gathering logs for kube-apiserver [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0] ...
	I0717 17:15:07.588681   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:07.647970   22585 logs.go:123] Gathering logs for etcd [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301] ...
	I0717 17:15:07.648002   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:07.718776   22585 logs.go:123] Gathering logs for coredns [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3] ...
	I0717 17:15:07.718811   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:07.756847   22585 logs.go:123] Gathering logs for kube-scheduler [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49] ...
	I0717 17:15:07.756886   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:07.808408   22585 logs.go:123] Gathering logs for kube-controller-manager [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f] ...
	I0717 17:15:07.808439   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:07.865958   22585 logs.go:123] Gathering logs for container status ...
	I0717 17:15:07.865990   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 17:15:07.910488   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:07.910520   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 17:15:07.910587   22585 out.go:239] X Problems detected in kubelet:
	W0717 17:15:07.910599   22585 out.go:239]   Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.910613   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.910625   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.910639   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:07.910650   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:07.910660   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:07.910670   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
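The "Found kubelet problem" warnings and the "X Problems detected in kubelet" summary above come from scanning the last 400 kubelet journal lines for error-looking entries and echoing the matches back. A hedged sketch of such a scan follows; the substring filter used here (reflector.go plus "is forbidden") is an assumption chosen to match the lines in this report, not minikube's real logs.go matching rules.

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Pull the last 400 kubelet journal lines, as in the ssh_runner command above.
        out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
        if err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(strings.NewReader(string(out)))
        for sc.Scan() {
            line := sc.Text()
            // Flag lines that look like the forbidden list/watch failures in this report.
            if strings.Contains(line, "reflector.go") && strings.Contains(line, "is forbidden") {
                fmt.Println("Found kubelet problem:", line)
            }
        }
    }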
	I0717 17:15:17.912372   22585 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:15:17.946433   22585 api_server.go:72] duration metric: took 1m58.686913769s to wait for apiserver process to appear ...
	I0717 17:15:17.946462   22585 api_server.go:88] waiting for apiserver healthz status ...
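With the addons up, the run waits for the apiserver's /healthz endpoint to answer. Below is a minimal sketch of that kind of probe; the endpoint address and the insecure TLS setting are placeholders for illustration, since the real check uses the cluster's address and CA from the kubeconfig.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Certificate verification is skipped only to keep the sketch self-contained.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.39.2:8443/healthz") // example apiserver endpoint
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthz:", string(body))
                    return
                }
            }
            time.Sleep(time.Second)
        }
    }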
	I0717 17:15:17.946498   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 17:15:17.946554   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 17:15:17.995751   22585 cri.go:89] found id: "fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:17.995774   22585 cri.go:89] found id: ""
	I0717 17:15:17.995782   22585 logs.go:276] 1 containers: [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0]
	I0717 17:15:17.995835   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.000045   22585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 17:15:18.000108   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 17:15:18.052831   22585 cri.go:89] found id: "8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:18.052857   22585 cri.go:89] found id: ""
	I0717 17:15:18.052867   22585 logs.go:276] 1 containers: [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301]
	I0717 17:15:18.052923   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.058072   22585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 17:15:18.058142   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 17:15:18.105473   22585 cri.go:89] found id: "65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:18.105491   22585 cri.go:89] found id: ""
	I0717 17:15:18.105498   22585 logs.go:276] 1 containers: [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3]
	I0717 17:15:18.105542   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.109700   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 17:15:18.109777   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 17:15:18.152712   22585 cri.go:89] found id: "e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:18.152735   22585 cri.go:89] found id: ""
	I0717 17:15:18.152743   22585 logs.go:276] 1 containers: [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49]
	I0717 17:15:18.152789   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.157009   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 17:15:18.157062   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 17:15:18.216897   22585 cri.go:89] found id: "e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:18.216918   22585 cri.go:89] found id: ""
	I0717 17:15:18.216926   22585 logs.go:276] 1 containers: [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e]
	I0717 17:15:18.216989   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.221020   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 17:15:18.221081   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 17:15:18.269293   22585 cri.go:89] found id: "9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:18.269320   22585 cri.go:89] found id: ""
	I0717 17:15:18.269330   22585 logs.go:276] 1 containers: [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f]
	I0717 17:15:18.269383   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:18.275239   22585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 17:15:18.275301   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 17:15:18.328171   22585 cri.go:89] found id: ""
	I0717 17:15:18.328199   22585 logs.go:276] 0 containers: []
	W0717 17:15:18.328208   22585 logs.go:278] No container was found matching "kindnet"
	I0717 17:15:18.328216   22585 logs.go:123] Gathering logs for describe nodes ...
	I0717 17:15:18.328230   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 17:15:18.477522   22585 logs.go:123] Gathering logs for coredns [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3] ...
	I0717 17:15:18.477555   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:18.531422   22585 logs.go:123] Gathering logs for kube-proxy [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e] ...
	I0717 17:15:18.531453   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:18.576084   22585 logs.go:123] Gathering logs for kube-controller-manager [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f] ...
	I0717 17:15:18.576112   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:18.658322   22585 logs.go:123] Gathering logs for CRI-O ...
	I0717 17:15:18.658356   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 17:15:19.594678   22585 logs.go:123] Gathering logs for dmesg ...
	I0717 17:15:19.594713   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 17:15:19.608749   22585 logs.go:123] Gathering logs for kube-apiserver [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0] ...
	I0717 17:15:19.608778   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:19.679044   22585 logs.go:123] Gathering logs for etcd [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301] ...
	I0717 17:15:19.679080   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:19.787871   22585 logs.go:123] Gathering logs for kube-scheduler [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49] ...
	I0717 17:15:19.787899   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:19.833819   22585 logs.go:123] Gathering logs for container status ...
	I0717 17:15:19.833844   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 17:15:19.883400   22585 logs.go:123] Gathering logs for kubelet ...
	I0717 17:15:19.883430   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 17:15:19.934522   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.707621    1283 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.934738   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.707649    1283 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.936554   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.961993    1283 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.936703   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.938699   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.938853   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.938987   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.939141   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:19.964558   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:19.964583   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 17:15:19.964631   22585 out.go:239] X Problems detected in kubelet:
	W0717 17:15:19.964642   22585 out.go:239]   Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.964654   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.964667   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.964676   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:19.964682   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:19.964688   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:19.964693   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:15:29.965835   22585 api_server.go:253] Checking apiserver healthz at https://192.168.39.27:8443/healthz ...
	I0717 17:15:29.972025   22585 api_server.go:279] https://192.168.39.27:8443/healthz returned 200:
	ok
	I0717 17:15:29.974152   22585 api_server.go:141] control plane version: v1.30.2
	I0717 17:15:29.974173   22585 api_server.go:131] duration metric: took 12.027705124s to wait for apiserver health ...
	I0717 17:15:29.974182   22585 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 17:15:29.974206   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 17:15:29.974254   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 17:15:30.011571   22585 cri.go:89] found id: "fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:30.011602   22585 cri.go:89] found id: ""
	I0717 17:15:30.011611   22585 logs.go:276] 1 containers: [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0]
	I0717 17:15:30.011658   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.015694   22585 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 17:15:30.015746   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 17:15:30.058475   22585 cri.go:89] found id: "8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:30.058500   22585 cri.go:89] found id: ""
	I0717 17:15:30.058508   22585 logs.go:276] 1 containers: [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301]
	I0717 17:15:30.058560   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.062635   22585 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 17:15:30.062699   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 17:15:30.100924   22585 cri.go:89] found id: "65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:30.100960   22585 cri.go:89] found id: ""
	I0717 17:15:30.100970   22585 logs.go:276] 1 containers: [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3]
	I0717 17:15:30.101020   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.104842   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 17:15:30.104896   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 17:15:30.139813   22585 cri.go:89] found id: "e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:30.139833   22585 cri.go:89] found id: ""
	I0717 17:15:30.139842   22585 logs.go:276] 1 containers: [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49]
	I0717 17:15:30.139891   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.143375   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 17:15:30.143420   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 17:15:30.183734   22585 cri.go:89] found id: "e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:30.183760   22585 cri.go:89] found id: ""
	I0717 17:15:30.183770   22585 logs.go:276] 1 containers: [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e]
	I0717 17:15:30.183827   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.187742   22585 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 17:15:30.187797   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 17:15:30.225009   22585 cri.go:89] found id: "9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:30.225034   22585 cri.go:89] found id: ""
	I0717 17:15:30.225043   22585 logs.go:276] 1 containers: [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f]
	I0717 17:15:30.225097   22585 ssh_runner.go:195] Run: which crictl
	I0717 17:15:30.229002   22585 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 17:15:30.229074   22585 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 17:15:30.264970   22585 cri.go:89] found id: ""
	I0717 17:15:30.264996   22585 logs.go:276] 0 containers: []
	W0717 17:15:30.265005   22585 logs.go:278] No container was found matching "kindnet"
	I0717 17:15:30.265015   22585 logs.go:123] Gathering logs for container status ...
	I0717 17:15:30.265029   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 17:15:30.316390   22585 logs.go:123] Gathering logs for kubelet ...
	I0717 17:15:30.316421   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 17:15:30.367990   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.707621    1283 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.368159   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.707649    1283 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.370077   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: W0717 17:13:25.961993    1283 reflector.go:547] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.370229   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.372199   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.372348   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.372482   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:30.372632   22585 logs.go:138] Found kubelet problem: Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:30.398511   22585 logs.go:123] Gathering logs for describe nodes ...
	I0717 17:15:30.398537   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 17:15:30.513573   22585 logs.go:123] Gathering logs for etcd [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301] ...
	I0717 17:15:30.513601   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301"
	I0717 17:15:30.578793   22585 logs.go:123] Gathering logs for kube-proxy [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e] ...
	I0717 17:15:30.578827   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e"
	I0717 17:15:30.615776   22585 logs.go:123] Gathering logs for kube-controller-manager [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f] ...
	I0717 17:15:30.615803   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f"
	I0717 17:15:30.681514   22585 logs.go:123] Gathering logs for CRI-O ...
	I0717 17:15:30.681552   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 17:15:31.549350   22585 logs.go:123] Gathering logs for dmesg ...
	I0717 17:15:31.549393   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 17:15:31.563477   22585 logs.go:123] Gathering logs for kube-apiserver [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0] ...
	I0717 17:15:31.563504   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0"
	I0717 17:15:31.613144   22585 logs.go:123] Gathering logs for coredns [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3] ...
	I0717 17:15:31.613170   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3"
	I0717 17:15:31.647893   22585 logs.go:123] Gathering logs for kube-scheduler [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49] ...
	I0717 17:15:31.647921   22585 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49"
	I0717 17:15:31.686036   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:31.686062   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 17:15:31.686113   22585 out.go:239] X Problems detected in kubelet:
	W0717 17:15:31.686122   22585 out.go:239]   Jul 17 17:13:25 addons-435911 kubelet[1283]: E0717 17:13:25.962025    1283 reflector.go:150] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-435911' and this object
	W0717 17:15:31.686132   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141315    1283 reflector.go:547] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:31.686143   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141436    1283 reflector.go:150] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-435911" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:31.686149   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: W0717 17:13:27.141548    1283 reflector.go:547] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	W0717 17:15:31.686158   22585 out.go:239]   Jul 17 17:13:27 addons-435911 kubelet[1283]: E0717 17:13:27.141601    1283 reflector.go:150] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-435911" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-435911' and this object
	I0717 17:15:31.686164   22585 out.go:304] Setting ErrFile to fd 2...
	I0717 17:15:31.686172   22585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:15:41.699123   22585 system_pods.go:59] 18 kube-system pods found
	I0717 17:15:41.699153   22585 system_pods.go:61] "coredns-7db6d8ff4d-ktksd" [68b98670-2ada-403b-9f7f-a712b7a3ace4] Running
	I0717 17:15:41.699157   22585 system_pods.go:61] "csi-hostpath-attacher-0" [72a7a273-f40b-4503-a6f4-00ff9385aeda] Running
	I0717 17:15:41.699161   22585 system_pods.go:61] "csi-hostpath-resizer-0" [e50d25c5-3dad-4b92-ba5b-1e5458ec91a1] Running
	I0717 17:15:41.699165   22585 system_pods.go:61] "csi-hostpathplugin-nnchn" [4379d8e7-b277-4b17-968f-98ee1a746757] Running
	I0717 17:15:41.699167   22585 system_pods.go:61] "etcd-addons-435911" [b91aac8f-3bf7-4acd-aa81-40cee5dcb0f4] Running
	I0717 17:15:41.699170   22585 system_pods.go:61] "kube-apiserver-addons-435911" [31459445-84ba-4687-b7d1-996c53960592] Running
	I0717 17:15:41.699173   22585 system_pods.go:61] "kube-controller-manager-addons-435911" [36229cb2-73ea-4d6d-8d4f-d43b8b91fcd2] Running
	I0717 17:15:41.699178   22585 system_pods.go:61] "kube-ingress-dns-minikube" [5ba15390-d48e-46dd-a033-94fc60c42981] Running
	I0717 17:15:41.699181   22585 system_pods.go:61] "kube-proxy-s2kxf" [3739bf30-2198-42bf-a1c6-c53e9bbfe970] Running
	I0717 17:15:41.699184   22585 system_pods.go:61] "kube-scheduler-addons-435911" [35d4b1a8-5360-448f-887f-073e3ae0301d] Running
	I0717 17:15:41.699187   22585 system_pods.go:61] "metrics-server-c59844bb4-qfn6h" [594c6a3c-368e-421e-9d3f-ceb3426c0cf7] Running
	I0717 17:15:41.699190   22585 system_pods.go:61] "nvidia-device-plugin-daemonset-xst8q" [a0449eb2-9a20-4b3a-b414-1a8ca2c38090] Running
	I0717 17:15:41.699192   22585 system_pods.go:61] "registry-656c9c8d9c-k8vqb" [b2c62d08-0816-405d-b5e4-78e70611f29b] Running
	I0717 17:15:41.699197   22585 system_pods.go:61] "registry-proxy-qxnzl" [a6c49b2c-06f8-4825-b8b7-d2233c0cb798] Running
	I0717 17:15:41.699201   22585 system_pods.go:61] "snapshot-controller-745499f584-j5jh5" [55e87176-4e97-4953-b593-ecae177e3403] Running
	I0717 17:15:41.699205   22585 system_pods.go:61] "snapshot-controller-745499f584-ppvbb" [68b3d0a0-cba2-4f65-9487-adf50c36096f] Running
	I0717 17:15:41.699208   22585 system_pods.go:61] "storage-provisioner" [055c9722-8252-48a5-9048-7fcbc3cf7a2b] Running
	I0717 17:15:41.699211   22585 system_pods.go:61] "tiller-deploy-6677d64bcd-4vwq8" [bb7ff47b-ce42-448a-bc9b-96324fdaac73] Running
	I0717 17:15:41.699216   22585 system_pods.go:74] duration metric: took 11.725028942s to wait for pod list to return data ...
	I0717 17:15:41.699226   22585 default_sa.go:34] waiting for default service account to be created ...
	I0717 17:15:41.701409   22585 default_sa.go:45] found service account: "default"
	I0717 17:15:41.701427   22585 default_sa.go:55] duration metric: took 2.195384ms for default service account to be created ...
	I0717 17:15:41.701434   22585 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 17:15:41.711246   22585 system_pods.go:86] 18 kube-system pods found
	I0717 17:15:41.711276   22585 system_pods.go:89] "coredns-7db6d8ff4d-ktksd" [68b98670-2ada-403b-9f7f-a712b7a3ace4] Running
	I0717 17:15:41.711281   22585 system_pods.go:89] "csi-hostpath-attacher-0" [72a7a273-f40b-4503-a6f4-00ff9385aeda] Running
	I0717 17:15:41.711286   22585 system_pods.go:89] "csi-hostpath-resizer-0" [e50d25c5-3dad-4b92-ba5b-1e5458ec91a1] Running
	I0717 17:15:41.711290   22585 system_pods.go:89] "csi-hostpathplugin-nnchn" [4379d8e7-b277-4b17-968f-98ee1a746757] Running
	I0717 17:15:41.711294   22585 system_pods.go:89] "etcd-addons-435911" [b91aac8f-3bf7-4acd-aa81-40cee5dcb0f4] Running
	I0717 17:15:41.711298   22585 system_pods.go:89] "kube-apiserver-addons-435911" [31459445-84ba-4687-b7d1-996c53960592] Running
	I0717 17:15:41.711304   22585 system_pods.go:89] "kube-controller-manager-addons-435911" [36229cb2-73ea-4d6d-8d4f-d43b8b91fcd2] Running
	I0717 17:15:41.711309   22585 system_pods.go:89] "kube-ingress-dns-minikube" [5ba15390-d48e-46dd-a033-94fc60c42981] Running
	I0717 17:15:41.711313   22585 system_pods.go:89] "kube-proxy-s2kxf" [3739bf30-2198-42bf-a1c6-c53e9bbfe970] Running
	I0717 17:15:41.711317   22585 system_pods.go:89] "kube-scheduler-addons-435911" [35d4b1a8-5360-448f-887f-073e3ae0301d] Running
	I0717 17:15:41.711321   22585 system_pods.go:89] "metrics-server-c59844bb4-qfn6h" [594c6a3c-368e-421e-9d3f-ceb3426c0cf7] Running
	I0717 17:15:41.711326   22585 system_pods.go:89] "nvidia-device-plugin-daemonset-xst8q" [a0449eb2-9a20-4b3a-b414-1a8ca2c38090] Running
	I0717 17:15:41.711330   22585 system_pods.go:89] "registry-656c9c8d9c-k8vqb" [b2c62d08-0816-405d-b5e4-78e70611f29b] Running
	I0717 17:15:41.711336   22585 system_pods.go:89] "registry-proxy-qxnzl" [a6c49b2c-06f8-4825-b8b7-d2233c0cb798] Running
	I0717 17:15:41.711339   22585 system_pods.go:89] "snapshot-controller-745499f584-j5jh5" [55e87176-4e97-4953-b593-ecae177e3403] Running
	I0717 17:15:41.711345   22585 system_pods.go:89] "snapshot-controller-745499f584-ppvbb" [68b3d0a0-cba2-4f65-9487-adf50c36096f] Running
	I0717 17:15:41.711349   22585 system_pods.go:89] "storage-provisioner" [055c9722-8252-48a5-9048-7fcbc3cf7a2b] Running
	I0717 17:15:41.711355   22585 system_pods.go:89] "tiller-deploy-6677d64bcd-4vwq8" [bb7ff47b-ce42-448a-bc9b-96324fdaac73] Running
	I0717 17:15:41.711362   22585 system_pods.go:126] duration metric: took 9.922561ms to wait for k8s-apps to be running ...
	I0717 17:15:41.711368   22585 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 17:15:41.711412   22585 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:15:41.729954   22585 system_svc.go:56] duration metric: took 18.574398ms WaitForService to wait for kubelet
	I0717 17:15:41.729987   22585 kubeadm.go:582] duration metric: took 2m22.470473505s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:15:41.730013   22585 node_conditions.go:102] verifying NodePressure condition ...
	I0717 17:15:41.732689   22585 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:15:41.732714   22585 node_conditions.go:123] node cpu capacity is 2
	I0717 17:15:41.732726   22585 node_conditions.go:105] duration metric: took 2.707848ms to run NodePressure ...
	I0717 17:15:41.732736   22585 start.go:241] waiting for startup goroutines ...
	I0717 17:15:41.732744   22585 start.go:246] waiting for cluster config update ...
	I0717 17:15:41.732757   22585 start.go:255] writing updated cluster config ...
	I0717 17:15:41.733021   22585 ssh_runner.go:195] Run: rm -f paused
	I0717 17:15:41.779839   22585 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 17:15:41.782451   22585 out.go:177] * Done! kubectl is now configured to use "addons-435911" cluster and "default" namespace by default
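	The log-gathering steps recorded above boil down to a handful of commands the driver runs over SSH on the node. A minimal sketch for reproducing them by hand, assuming a shell on the addons-435911 VM (for example via `minikube ssh -p addons-435911`) and substituting a real ID from the first command for <container-id>; the IDs and the 192.168.39.27 address are specific to this run:
	
	    # list a control-plane component's container ID (example: kube-apiserver)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # tail the last 400 lines of that container's logs
	    sudo /usr/bin/crictl logs --tail 400 <container-id>
	    # CRI-O and kubelet service logs, plus kernel warnings, as gathered above
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	
	The apiserver health probe seen above hits https://192.168.39.27:8443/healthz directly; from a workstation with the kubeconfig for this cluster, `kubectl get --raw /healthz` performs the equivalent check without needing the node IP.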
	
	
	==> CRI-O <==
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.802270760Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721236898802244228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9278febb-0635-43e5-9015-8e6473ca9205 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.802831910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=528c3abf-7dd6-4152-8324-63a0a1d89f83 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.802898876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=528c3abf-7dd6-4152-8324-63a0a1d89f83 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.803227010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b0a23ffb0e78b96159f53785471db113a79268302f69933e426f918beb14167,PodSandboxId:f3df555924b34d68e8ec7f6d1678e96c200a5066cdd0717517bd08ff82f13861,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721236721496045662,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-sn68h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bd855e9-5ad2-4b53-a4b8-81a2548d80be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b206c22,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81ad57bca11afa9ff4ef8c1f48f60f8aa0a5a938b76a0107c155ef833003f82,PodSandboxId:d3a7397c62339211450604403f550b91ccc713a8b3f06df26a76033e7365def5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721236581473502391,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c68e6dcb-da12-4d99-a5b7-eb687873f149,},Annotations:map[string]string{io.kubernet
es.container.hash: df200387,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2d1134191e675f19f3922068968108787bd78c032c106a32fc420cb773502,PodSandboxId:ef6dac799f02266d26924865012567cf27959da4f507249d51ca4396c25bcfb6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721236558550805305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-znd2v,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 46cfb6c7-3a68-411b-968e-8ab21c2226ff,},Annotations:map[string]string{io.kubernetes.container.hash: d0a9f3af,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa919d6ecaffe5a059fd1f624e32a8769ad52beed2e788f61a7207d198bfdbf3,PodSandboxId:5231a839fcb4f18f8df454be7f85541f20b3020e0e5d798c1bdb219b73d7f72c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721236484815341994,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fn48r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2a4fbcb0-0e68-4190-b1fa-e95a9ae93945,},Annotations:map[string]string{io.kubernetes.container.hash: 6deb9d4b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343cf42df006c62fc492f1c30b65e3803b40602bd440e4d79e1758f66954a677,PodSandboxId:fd99af3c2f91f2c6ff39c1f834be84984049cdbe34e2f8ab393543c00b958c1c,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721236
460169056438,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-gj64l,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d75d651e-dc3f-4ea9-b380-f7637ab4ce97,},Annotations:map[string]string{io.kubernetes.container.hash: 5747c94d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881db15d7669e577c561397f470be0e4d6cff2c4e7dfae4a371fd85ddd50cada,PodSandboxId:c854a60739bf5901594c3264b49a036bc306e4d4aac406f42327194eec892deb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721236448232173244,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-blrqx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a9ebaa8e-4472-4135-822c-5fd806eb7fb6,},Annotations:map[string]string{io.kubernetes.container.hash: e39f8ab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc62acb56fc72aae8ad55516ee25f47058ffdbabc3179ce3b5922975c55be40e,PodSandboxId:c3671dfdd359cd62f93771eb79e9dc4cbf1ef3fc0f0172b5004f65065d2f9330,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e
588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721236439589292704,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qfn6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594c6a3c-368e-421e-9d3f-ceb3426c0cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 94f689a6,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a721d6e9c61620875bf344ec13670996a8189bfa2f61fbb74a2396a22c8419f,PodSandboxId:8df7bee35d3e05d9bcbd945f6c85c7273811528beef3f76175a6057d51b5161e,Metadata:&ContainerMetadata{Name:storage-prov
isioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721236404696134980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055c9722-8252-48a5-9048-7fcbc3cf7a2b,},Annotations:map[string]string{io.kubernetes.container.hash: 42746b27,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3,PodSandboxId:92d703072e50ff312ea100bd9386e950decf2d6f218d27155040ffeb86309ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721236402159172231,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ktksd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b98670-2ada-403b-9f7f-a712b7a3ace4,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9fd0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792b
08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e,PodSandboxId:6c5b966fad82bfc4f39fd7358f96dd446e9416a76e70d16a4da10f3a887a8715,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721236399847230286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2kxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3739bf30-2198-42bf-a1c6-c53e9bbfe970,},Annotations:map[string]string{io.kubernetes.container.hash: 7216d3fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b8a95edb5a47defc155d75aa3fbf7dbdfd1b
c1ae0be4d4e830974ce2f42b49,PodSandboxId:e075b49efeab91f29141421bac3be5c5e8305e7d89716ebf3d53cd454bd4efee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721236380462267709,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074093c21d39c7941f7e4c1e5b68a75b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4a
a596d301,PodSandboxId:6d70065e627bc328c607bf5304d02f1c86f5163ef67b267615e96123eb22ec70,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721236380403800762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94f24a073ac9cce58506fe4709d9ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 21f309f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0,PodSandboxId:f26b3799bdb11db73e72f6f77
4ac299128453bef930874d75bb0a3d0a1236864,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721236380336231830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0390e02e778f8620cd2833d7adc79023,},Annotations:map[string]string{io.kubernetes.container.hash: b74cd706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f,PodSandboxId:98a01a0664d4dff8283fd820de1ab183be1f301627
56b655c3d7e5b383f2ac96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721236380315489334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef80a4a983e4af3963c62d6367bb65c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=528c3abf-7dd6-4152-8324-63a0a1d89f83 name=/runtime.v1.RuntimeService/ListCo
ntainers
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.848676083Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=316390db-3c29-4b35-865f-cba42e36d4b8 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.848890859Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=316390db-3c29-4b35-865f-cba42e36d4b8 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.849988252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6d06136-069a-43af-981f-5fe365075054 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.851198011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721236898851175100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6d06136-069a-43af-981f-5fe365075054 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.851800038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61fbc620-0d0f-4e37-a853-e413944273f5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.851853717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61fbc620-0d0f-4e37-a853-e413944273f5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.852153583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b0a23ffb0e78b96159f53785471db113a79268302f69933e426f918beb14167,PodSandboxId:f3df555924b34d68e8ec7f6d1678e96c200a5066cdd0717517bd08ff82f13861,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721236721496045662,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-sn68h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bd855e9-5ad2-4b53-a4b8-81a2548d80be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b206c22,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81ad57bca11afa9ff4ef8c1f48f60f8aa0a5a938b76a0107c155ef833003f82,PodSandboxId:d3a7397c62339211450604403f550b91ccc713a8b3f06df26a76033e7365def5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721236581473502391,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c68e6dcb-da12-4d99-a5b7-eb687873f149,},Annotations:map[string]string{io.kubernet
es.container.hash: df200387,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2d1134191e675f19f3922068968108787bd78c032c106a32fc420cb773502,PodSandboxId:ef6dac799f02266d26924865012567cf27959da4f507249d51ca4396c25bcfb6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721236558550805305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-znd2v,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 46cfb6c7-3a68-411b-968e-8ab21c2226ff,},Annotations:map[string]string{io.kubernetes.container.hash: d0a9f3af,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa919d6ecaffe5a059fd1f624e32a8769ad52beed2e788f61a7207d198bfdbf3,PodSandboxId:5231a839fcb4f18f8df454be7f85541f20b3020e0e5d798c1bdb219b73d7f72c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721236484815341994,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fn48r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2a4fbcb0-0e68-4190-b1fa-e95a9ae93945,},Annotations:map[string]string{io.kubernetes.container.hash: 6deb9d4b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343cf42df006c62fc492f1c30b65e3803b40602bd440e4d79e1758f66954a677,PodSandboxId:fd99af3c2f91f2c6ff39c1f834be84984049cdbe34e2f8ab393543c00b958c1c,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721236
460169056438,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-gj64l,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d75d651e-dc3f-4ea9-b380-f7637ab4ce97,},Annotations:map[string]string{io.kubernetes.container.hash: 5747c94d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881db15d7669e577c561397f470be0e4d6cff2c4e7dfae4a371fd85ddd50cada,PodSandboxId:c854a60739bf5901594c3264b49a036bc306e4d4aac406f42327194eec892deb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721236448232173244,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-blrqx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a9ebaa8e-4472-4135-822c-5fd806eb7fb6,},Annotations:map[string]string{io.kubernetes.container.hash: e39f8ab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc62acb56fc72aae8ad55516ee25f47058ffdbabc3179ce3b5922975c55be40e,PodSandboxId:c3671dfdd359cd62f93771eb79e9dc4cbf1ef3fc0f0172b5004f65065d2f9330,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e
588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721236439589292704,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qfn6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594c6a3c-368e-421e-9d3f-ceb3426c0cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 94f689a6,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a721d6e9c61620875bf344ec13670996a8189bfa2f61fbb74a2396a22c8419f,PodSandboxId:8df7bee35d3e05d9bcbd945f6c85c7273811528beef3f76175a6057d51b5161e,Metadata:&ContainerMetadata{Name:storage-prov
isioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721236404696134980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055c9722-8252-48a5-9048-7fcbc3cf7a2b,},Annotations:map[string]string{io.kubernetes.container.hash: 42746b27,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3,PodSandboxId:92d703072e50ff312ea100bd9386e950decf2d6f218d27155040ffeb86309ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721236402159172231,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ktksd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b98670-2ada-403b-9f7f-a712b7a3ace4,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9fd0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792b
08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e,PodSandboxId:6c5b966fad82bfc4f39fd7358f96dd446e9416a76e70d16a4da10f3a887a8715,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721236399847230286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2kxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3739bf30-2198-42bf-a1c6-c53e9bbfe970,},Annotations:map[string]string{io.kubernetes.container.hash: 7216d3fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b8a95edb5a47defc155d75aa3fbf7dbdfd1b
c1ae0be4d4e830974ce2f42b49,PodSandboxId:e075b49efeab91f29141421bac3be5c5e8305e7d89716ebf3d53cd454bd4efee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721236380462267709,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074093c21d39c7941f7e4c1e5b68a75b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4a
a596d301,PodSandboxId:6d70065e627bc328c607bf5304d02f1c86f5163ef67b267615e96123eb22ec70,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721236380403800762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94f24a073ac9cce58506fe4709d9ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 21f309f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0,PodSandboxId:f26b3799bdb11db73e72f6f77
4ac299128453bef930874d75bb0a3d0a1236864,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721236380336231830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0390e02e778f8620cd2833d7adc79023,},Annotations:map[string]string{io.kubernetes.container.hash: b74cd706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f,PodSandboxId:98a01a0664d4dff8283fd820de1ab183be1f301627
56b655c3d7e5b383f2ac96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721236380315489334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef80a4a983e4af3963c62d6367bb65c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61fbc620-0d0f-4e37-a853-e413944273f5 name=/runtime.v1.RuntimeService/ListCo
ntainers
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.883318880Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80db3173-599a-432b-9ed2-60cd5fe43b31 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.883435259Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80db3173-599a-432b-9ed2-60cd5fe43b31 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.884629677Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21d01c72-362b-4182-a0dc-2170a5140a29 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.885767948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721236898885743146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21d01c72-362b-4182-a0dc-2170a5140a29 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.886245177Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=620c50c7-e41c-45a0-b8be-df2d7bb1dbb8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.886340128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=620c50c7-e41c-45a0-b8be-df2d7bb1dbb8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.886812472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b0a23ffb0e78b96159f53785471db113a79268302f69933e426f918beb14167,PodSandboxId:f3df555924b34d68e8ec7f6d1678e96c200a5066cdd0717517bd08ff82f13861,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721236721496045662,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-sn68h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bd855e9-5ad2-4b53-a4b8-81a2548d80be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b206c22,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81ad57bca11afa9ff4ef8c1f48f60f8aa0a5a938b76a0107c155ef833003f82,PodSandboxId:d3a7397c62339211450604403f550b91ccc713a8b3f06df26a76033e7365def5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721236581473502391,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c68e6dcb-da12-4d99-a5b7-eb687873f149,},Annotations:map[string]string{io.kubernet
es.container.hash: df200387,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2d1134191e675f19f3922068968108787bd78c032c106a32fc420cb773502,PodSandboxId:ef6dac799f02266d26924865012567cf27959da4f507249d51ca4396c25bcfb6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721236558550805305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-znd2v,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 46cfb6c7-3a68-411b-968e-8ab21c2226ff,},Annotations:map[string]string{io.kubernetes.container.hash: d0a9f3af,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa919d6ecaffe5a059fd1f624e32a8769ad52beed2e788f61a7207d198bfdbf3,PodSandboxId:5231a839fcb4f18f8df454be7f85541f20b3020e0e5d798c1bdb219b73d7f72c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721236484815341994,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fn48r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2a4fbcb0-0e68-4190-b1fa-e95a9ae93945,},Annotations:map[string]string{io.kubernetes.container.hash: 6deb9d4b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343cf42df006c62fc492f1c30b65e3803b40602bd440e4d79e1758f66954a677,PodSandboxId:fd99af3c2f91f2c6ff39c1f834be84984049cdbe34e2f8ab393543c00b958c1c,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721236
460169056438,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-gj64l,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d75d651e-dc3f-4ea9-b380-f7637ab4ce97,},Annotations:map[string]string{io.kubernetes.container.hash: 5747c94d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881db15d7669e577c561397f470be0e4d6cff2c4e7dfae4a371fd85ddd50cada,PodSandboxId:c854a60739bf5901594c3264b49a036bc306e4d4aac406f42327194eec892deb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721236448232173244,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-blrqx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a9ebaa8e-4472-4135-822c-5fd806eb7fb6,},Annotations:map[string]string{io.kubernetes.container.hash: e39f8ab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc62acb56fc72aae8ad55516ee25f47058ffdbabc3179ce3b5922975c55be40e,PodSandboxId:c3671dfdd359cd62f93771eb79e9dc4cbf1ef3fc0f0172b5004f65065d2f9330,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e
588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721236439589292704,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qfn6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594c6a3c-368e-421e-9d3f-ceb3426c0cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 94f689a6,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a721d6e9c61620875bf344ec13670996a8189bfa2f61fbb74a2396a22c8419f,PodSandboxId:8df7bee35d3e05d9bcbd945f6c85c7273811528beef3f76175a6057d51b5161e,Metadata:&ContainerMetadata{Name:storage-prov
isioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721236404696134980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055c9722-8252-48a5-9048-7fcbc3cf7a2b,},Annotations:map[string]string{io.kubernetes.container.hash: 42746b27,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3,PodSandboxId:92d703072e50ff312ea100bd9386e950decf2d6f218d27155040ffeb86309ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721236402159172231,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ktksd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b98670-2ada-403b-9f7f-a712b7a3ace4,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9fd0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792b
08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e,PodSandboxId:6c5b966fad82bfc4f39fd7358f96dd446e9416a76e70d16a4da10f3a887a8715,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721236399847230286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2kxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3739bf30-2198-42bf-a1c6-c53e9bbfe970,},Annotations:map[string]string{io.kubernetes.container.hash: 7216d3fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b8a95edb5a47defc155d75aa3fbf7dbdfd1b
c1ae0be4d4e830974ce2f42b49,PodSandboxId:e075b49efeab91f29141421bac3be5c5e8305e7d89716ebf3d53cd454bd4efee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721236380462267709,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074093c21d39c7941f7e4c1e5b68a75b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4a
a596d301,PodSandboxId:6d70065e627bc328c607bf5304d02f1c86f5163ef67b267615e96123eb22ec70,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721236380403800762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94f24a073ac9cce58506fe4709d9ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 21f309f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0,PodSandboxId:f26b3799bdb11db73e72f6f77
4ac299128453bef930874d75bb0a3d0a1236864,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721236380336231830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0390e02e778f8620cd2833d7adc79023,},Annotations:map[string]string{io.kubernetes.container.hash: b74cd706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f,PodSandboxId:98a01a0664d4dff8283fd820de1ab183be1f301627
56b655c3d7e5b383f2ac96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721236380315489334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef80a4a983e4af3963c62d6367bb65c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=620c50c7-e41c-45a0-b8be-df2d7bb1dbb8 name=/runtime.v1.RuntimeService/ListCo
ntainers
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.923085065Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49677ccc-1dcb-4698-be47-bef8222982b8 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.923160296Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49677ccc-1dcb-4698-be47-bef8222982b8 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.924354993Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e07d2779-f1ae-43e5-b15e-f2a82da939b1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.925672160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721236898925647018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:580553,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e07d2779-f1ae-43e5-b15e-f2a82da939b1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.926188011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9722230-7472-493a-aae4-1e1658e5c4b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.926242739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9722230-7472-493a-aae4-1e1658e5c4b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:21:38 addons-435911 crio[686]: time="2024-07-17 17:21:38.926611184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b0a23ffb0e78b96159f53785471db113a79268302f69933e426f918beb14167,PodSandboxId:f3df555924b34d68e8ec7f6d1678e96c200a5066cdd0717517bd08ff82f13861,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1721236721496045662,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-sn68h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8bd855e9-5ad2-4b53-a4b8-81a2548d80be,},Annotations:map[string]string{io.kubernetes.container.hash: 2b206c22,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a81ad57bca11afa9ff4ef8c1f48f60f8aa0a5a938b76a0107c155ef833003f82,PodSandboxId:d3a7397c62339211450604403f550b91ccc713a8b3f06df26a76033e7365def5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233,State:CONTAINER_RUNNING,CreatedAt:1721236581473502391,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c68e6dcb-da12-4d99-a5b7-eb687873f149,},Annotations:map[string]string{io.kubernet
es.container.hash: df200387,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bd2d1134191e675f19f3922068968108787bd78c032c106a32fc420cb773502,PodSandboxId:ef6dac799f02266d26924865012567cf27959da4f507249d51ca4396c25bcfb6,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53dd31cf1afe45ab3909904bbdf974dca721240681c47e172e09b8bf656db97d,State:CONTAINER_RUNNING,CreatedAt:1721236558550805305,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7867546754-znd2v,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 46cfb6c7-3a68-411b-968e-8ab21c2226ff,},Annotations:map[string]string{io.kubernetes.container.hash: d0a9f3af,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa919d6ecaffe5a059fd1f624e32a8769ad52beed2e788f61a7207d198bfdbf3,PodSandboxId:5231a839fcb4f18f8df454be7f85541f20b3020e0e5d798c1bdb219b73d7f72c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1721236484815341994,Labels:map[string]string{io.kubernetes.container.name: gcp-aut
h,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-fn48r,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 2a4fbcb0-0e68-4190-b1fa-e95a9ae93945,},Annotations:map[string]string{io.kubernetes.container.hash: 6deb9d4b,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:343cf42df006c62fc492f1c30b65e3803b40602bd440e4d79e1758f66954a677,PodSandboxId:fd99af3c2f91f2c6ff39c1f834be84984049cdbe34e2f8ab393543c00b958c1c,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7e3a3eeaf5ed4edd2279898cff978fbf31e46891773de4113e6437fa6d73fe6,State:CONTAINER_RUNNING,CreatedAt:1721236
460169056438,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-799879c74f-gj64l,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: d75d651e-dc3f-4ea9-b380-f7637ab4ce97,},Annotations:map[string]string{io.kubernetes.container.hash: 5747c94d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881db15d7669e577c561397f470be0e4d6cff2c4e7dfae4a371fd85ddd50cada,PodSandboxId:c854a60739bf5901594c3264b49a036bc306e4d4aac406f42327194eec892deb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImag
e:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1721236448232173244,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-blrqx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: a9ebaa8e-4472-4135-822c-5fd806eb7fb6,},Annotations:map[string]string{io.kubernetes.container.hash: e39f8ab5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc62acb56fc72aae8ad55516ee25f47058ffdbabc3179ce3b5922975c55be40e,PodSandboxId:c3671dfdd359cd62f93771eb79e9dc4cbf1ef3fc0f0172b5004f65065d2f9330,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e
588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1721236439589292704,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qfn6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 594c6a3c-368e-421e-9d3f-ceb3426c0cf7,},Annotations:map[string]string{io.kubernetes.container.hash: 94f689a6,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a721d6e9c61620875bf344ec13670996a8189bfa2f61fbb74a2396a22c8419f,PodSandboxId:8df7bee35d3e05d9bcbd945f6c85c7273811528beef3f76175a6057d51b5161e,Metadata:&ContainerMetadata{Name:storage-prov
isioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721236404696134980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 055c9722-8252-48a5-9048-7fcbc3cf7a2b,},Annotations:map[string]string{io.kubernetes.container.hash: 42746b27,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3,PodSandboxId:92d703072e50ff312ea100bd9386e950decf2d6f218d27155040ffeb86309ba7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721236402159172231,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ktksd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b98670-2ada-403b-9f7f-a712b7a3ace4,},Annotations:map[string]string{io.kubernetes.container.hash: 33e9fd0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792b
08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e,PodSandboxId:6c5b966fad82bfc4f39fd7358f96dd446e9416a76e70d16a4da10f3a887a8715,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721236399847230286,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s2kxf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3739bf30-2198-42bf-a1c6-c53e9bbfe970,},Annotations:map[string]string{io.kubernetes.container.hash: 7216d3fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b8a95edb5a47defc155d75aa3fbf7dbdfd1b
c1ae0be4d4e830974ce2f42b49,PodSandboxId:e075b49efeab91f29141421bac3be5c5e8305e7d89716ebf3d53cd454bd4efee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721236380462267709,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074093c21d39c7941f7e4c1e5b68a75b,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4a
a596d301,PodSandboxId:6d70065e627bc328c607bf5304d02f1c86f5163ef67b267615e96123eb22ec70,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721236380403800762,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a94f24a073ac9cce58506fe4709d9ed1,},Annotations:map[string]string{io.kubernetes.container.hash: 21f309f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0,PodSandboxId:f26b3799bdb11db73e72f6f77
4ac299128453bef930874d75bb0a3d0a1236864,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721236380336231830,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0390e02e778f8620cd2833d7adc79023,},Annotations:map[string]string{io.kubernetes.container.hash: b74cd706,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f,PodSandboxId:98a01a0664d4dff8283fd820de1ab183be1f301627
56b655c3d7e5b383f2ac96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721236380315489334,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435911,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ef80a4a983e4af3963c62d6367bb65c,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9722230-7472-493a-aae4-1e1658e5c4b7 name=/runtime.v1.RuntimeService/ListCo
ntainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0b0a23ffb0e78       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   f3df555924b34       hello-world-app-6778b5fc9f-sn68h
	a81ad57bca11a       docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55                         5 minutes ago       Running             nginx                     0                   d3a7397c62339       nginx
	8bd2d1134191e       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                   5 minutes ago       Running             headlamp                  0                   ef6dac799f022       headlamp-7867546754-znd2v
	aa919d6ecaffe       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   5231a839fcb4f       gcp-auth-5db96cd9b4-fn48r
	343cf42df006c       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                         7 minutes ago       Running             yakd                      0                   fd99af3c2f91f       yakd-dashboard-799879c74f-gj64l
	881db15d7669e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   c854a60739bf5       local-path-provisioner-8d985888d-blrqx
	bc62acb56fc72       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   c3671dfdd359c       metrics-server-c59844bb4-qfn6h
	7a721d6e9c616       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   8df7bee35d3e0       storage-provisioner
	65933a91dc9ef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   92d703072e50f       coredns-7db6d8ff4d-ktksd
	e792b08ebd527       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                                        8 minutes ago       Running             kube-proxy                0                   6c5b966fad82b       kube-proxy-s2kxf
	e0b8a95edb5a4       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                                        8 minutes ago       Running             kube-scheduler            0                   e075b49efeab9       kube-scheduler-addons-435911
	8313f11cb4d95       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   6d70065e627bc       etcd-addons-435911
	fe5a18c9713d2       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                                        8 minutes ago       Running             kube-apiserver            0                   f26b3799bdb11       kube-apiserver-addons-435911
	9978a55587a89       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                                        8 minutes ago       Running             kube-controller-manager   0                   98a01a0664d4d       kube-controller-manager-addons-435911
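
For reference, a listing like the one above can normally be reproduced directly on the test VM via crictl over minikube ssh. A minimal sketch, assuming the addons-435911 profile from this report is still running:

    # list running containers on the node (sketch; profile name taken from this report)
    out/minikube-linux-amd64 -p addons-435911 ssh "sudo crictl ps"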
	
	
	==> coredns [65933a91dc9ef3332b8a44fe319de436f87aedb6fe0bb62e37f9b6bc22441ae3] <==
	[INFO] 10.244.0.7:39345 - 38367 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000135831s
	[INFO] 10.244.0.7:45263 - 65221 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000236831s
	[INFO] 10.244.0.7:45263 - 62522 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095704s
	[INFO] 10.244.0.7:52371 - 15909 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000160921s
	[INFO] 10.244.0.7:52371 - 14631 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000176843s
	[INFO] 10.244.0.7:39423 - 48435 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000189602s
	[INFO] 10.244.0.7:39423 - 32050 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000102393s
	[INFO] 10.244.0.7:47882 - 5362 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000098795s
	[INFO] 10.244.0.7:47882 - 20977 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000092583s
	[INFO] 10.244.0.7:60178 - 20395 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082831s
	[INFO] 10.244.0.7:60178 - 30121 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073424s
	[INFO] 10.244.0.7:52057 - 51165 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000155933s
	[INFO] 10.244.0.7:52057 - 58591 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000235363s
	[INFO] 10.244.0.7:48080 - 56348 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000073001s
	[INFO] 10.244.0.7:48080 - 58626 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000045221s
	[INFO] 10.244.0.22:36571 - 11224 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000538242s
	[INFO] 10.244.0.22:59498 - 37390 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000627304s
	[INFO] 10.244.0.22:51128 - 8813 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000080998s
	[INFO] 10.244.0.22:38099 - 38543 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009486s
	[INFO] 10.244.0.22:52087 - 11175 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007915s
	[INFO] 10.244.0.22:60207 - 13637 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105714s
	[INFO] 10.244.0.22:32927 - 62204 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000478535s
	[INFO] 10.244.0.22:48017 - 11200 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000449084s
	[INFO] 10.244.0.26:43330 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000427723s
	[INFO] 10.244.0.26:35060 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000133427s
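
The NXDOMAIN entries above are expected: with the default pod resolv.conf (ndots:5), each lookup of registry.kube-system.svc.cluster.local is first retried with the namespace, svc and cluster search suffixes appended, and only the final bare query resolves with NOERROR. A minimal sketch for pulling these CoreDNS logs, assuming the usual k8s-app=kube-dns label:

    # tail the CoreDNS pod logs (sketch; context name taken from this report)
    kubectl --context addons-435911 -n kube-system logs -l k8s-app=kube-dns --tail=50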
	
	
	==> describe nodes <==
	Name:               addons-435911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-435911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=addons-435911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T17_13_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-435911
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:13:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-435911
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:21:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:19:14 +0000   Wed, 17 Jul 2024 17:13:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:19:14 +0000   Wed, 17 Jul 2024 17:13:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:19:14 +0000   Wed, 17 Jul 2024 17:13:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:19:14 +0000   Wed, 17 Jul 2024 17:13:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    addons-435911
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d28c18b66294996a96261bd0a3a851e
	  System UUID:                4d28c18b-6629-4996-a962-61bd0a3a851e
	  Boot ID:                    3c05feed-3801-4256-af02-cf50ab398763
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-6778b5fc9f-sn68h          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  gcp-auth                    gcp-auth-5db96cd9b4-fn48r                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  headlamp                    headlamp-7867546754-znd2v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 coredns-7db6d8ff4d-ktksd                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m20s
	  kube-system                 etcd-addons-435911                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m35s
	  kube-system                 kube-apiserver-addons-435911              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-controller-manager-addons-435911     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-proxy-s2kxf                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-scheduler-addons-435911              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 metrics-server-c59844bb4-qfn6h            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m14s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  local-path-storage          local-path-provisioner-8d985888d-blrqx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  yakd-dashboard              yakd-dashboard-799879c74f-gj64l           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     8m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m18s                  kube-proxy       
	  Normal  Starting                 8m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m40s (x8 over 8m40s)  kubelet          Node addons-435911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m40s (x8 over 8m40s)  kubelet          Node addons-435911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m40s (x7 over 8m40s)  kubelet          Node addons-435911 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m34s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m34s                  kubelet          Node addons-435911 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s                  kubelet          Node addons-435911 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s                  kubelet          Node addons-435911 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m33s                  kubelet          Node addons-435911 status is now: NodeReady
	  Normal  RegisteredNode           8m21s                  node-controller  Node addons-435911 event: Registered Node addons-435911 in Controller
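
The request/limit percentages in the summary above are computed against the node's Allocatable figures: 850m of the 2 allocatable CPUs is 850/2000 ≈ 42%, and 498Mi of 3912780Ki (≈ 3821Mi) of memory is ≈ 13%, matching the values shown. The full node dump can typically be regenerated with:

    # re-dump the node description (sketch; context and node name taken from this report)
    kubectl --context addons-435911 describe node addons-435911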
	
	
	==> dmesg <==
	[  +5.084869] kauditd_printk_skb: 121 callbacks suppressed
	[  +5.011981] kauditd_printk_skb: 131 callbacks suppressed
	[  +5.063076] kauditd_printk_skb: 70 callbacks suppressed
	[ +22.001873] kauditd_printk_skb: 4 callbacks suppressed
	[Jul17 17:14] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.021474] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.639850] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.499565] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.863031] kauditd_printk_skb: 86 callbacks suppressed
	[  +6.315952] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.970508] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.414718] kauditd_printk_skb: 36 callbacks suppressed
	[Jul17 17:15] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.677046] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.986469] kauditd_printk_skb: 22 callbacks suppressed
	[Jul17 17:16] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.565904] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.382067] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.213190] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.048417] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.406326] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.191149] kauditd_printk_skb: 19 callbacks suppressed
	[  +8.393607] kauditd_printk_skb: 33 callbacks suppressed
	[Jul17 17:18] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.296977] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [8313f11cb4d9581b92f9c4a26572a0d323c45c35eea29dc40a475e4aa596d301] <==
	{"level":"info","ts":"2024-07-17T17:14:27.404515Z","caller":"traceutil/trace.go:171","msg":"trace[1863744211] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1064; }","duration":"276.218192ms","start":"2024-07-17T17:14:27.12829Z","end":"2024-07-17T17:14:27.404508Z","steps":["trace[1863744211] 'agreement among raft nodes before linearized reading'  (duration: 276.031687ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:14:27.404529Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.158147ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14299"}
	{"level":"info","ts":"2024-07-17T17:14:27.404552Z","caller":"traceutil/trace.go:171","msg":"trace[1162972558] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1064; }","duration":"127.208048ms","start":"2024-07-17T17:14:27.277338Z","end":"2024-07-17T17:14:27.404546Z","steps":["trace[1162972558] 'agreement among raft nodes before linearized reading'  (duration: 127.129644ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T17:15:50.08007Z","caller":"traceutil/trace.go:171","msg":"trace[557754540] linearizableReadLoop","detail":"{readStateIndex:1439; appliedIndex:1439; }","duration":"274.971404ms","start":"2024-07-17T17:15:49.805089Z","end":"2024-07-17T17:15:50.08006Z","steps":["trace[557754540] 'read index received'  (duration: 274.96477ms)","trace[557754540] 'applied index is now lower than readState.Index'  (duration: 5.665µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T17:15:50.080026Z","caller":"traceutil/trace.go:171","msg":"trace[1796510116] transaction","detail":"{read_only:false; response_revision:1390; number_of_response:1; }","duration":"335.461276ms","start":"2024-07-17T17:15:49.744523Z","end":"2024-07-17T17:15:50.079984Z","steps":["trace[1796510116] 'process raft request'  (duration: 335.317541ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:15:50.080434Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:15:49.744504Z","time spent":"335.759054ms","remote":"127.0.0.1:49660","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2010,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/default/task-pv-pod\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/task-pv-pod\" value_size:1968 >> failure:<>"}
	{"level":"warn","ts":"2024-07-17T17:15:50.080576Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.46969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:84094"}
	{"level":"info","ts":"2024-07-17T17:15:50.080614Z","caller":"traceutil/trace.go:171","msg":"trace[631614517] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1390; }","duration":"275.546656ms","start":"2024-07-17T17:15:49.805056Z","end":"2024-07-17T17:15:50.080603Z","steps":["trace[631614517] 'agreement among raft nodes before linearized reading'  (duration: 275.191071ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:15:50.083584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.120987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-17T17:15:50.083635Z","caller":"traceutil/trace.go:171","msg":"trace[895294486] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1390; }","duration":"189.24201ms","start":"2024-07-17T17:15:49.894384Z","end":"2024-07-17T17:15:50.083626Z","steps":["trace[895294486] 'agreement among raft nodes before linearized reading'  (duration: 189.119811ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T17:16:04.887658Z","caller":"traceutil/trace.go:171","msg":"trace[795762626] transaction","detail":"{read_only:false; response_revision:1447; number_of_response:1; }","duration":"393.674877ms","start":"2024-07-17T17:16:04.49363Z","end":"2024-07-17T17:16:04.887305Z","steps":["trace[795762626] 'process raft request'  (duration: 393.345694ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:16:04.887877Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:16:04.493615Z","time spent":"394.159797ms","remote":"127.0.0.1:49660","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4006,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/tiller-deploy-6677d64bcd-4vwq8\" mod_revision:1439 > success:<request_put:<key:\"/registry/pods/kube-system/tiller-deploy-6677d64bcd-4vwq8\" value_size:3941 >> failure:<request_range:<key:\"/registry/pods/kube-system/tiller-deploy-6677d64bcd-4vwq8\" > >"}
	{"level":"info","ts":"2024-07-17T17:16:04.888943Z","caller":"traceutil/trace.go:171","msg":"trace[1999490122] linearizableReadLoop","detail":"{readStateIndex:1500; appliedIndex:1499; }","duration":"273.530991ms","start":"2024-07-17T17:16:04.615383Z","end":"2024-07-17T17:16:04.888914Z","steps":["trace[1999490122] 'read index received'  (duration: 271.536182ms)","trace[1999490122] 'applied index is now lower than readState.Index'  (duration: 1.993603ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T17:16:04.889154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.760712ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.27\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-07-17T17:16:04.891333Z","caller":"traceutil/trace.go:171","msg":"trace[443358345] range","detail":"{range_begin:/registry/masterleases/192.168.39.27; range_end:; response_count:1; response_revision:1447; }","duration":"273.815828ms","start":"2024-07-17T17:16:04.61536Z","end":"2024-07-17T17:16:04.889176Z","steps":["trace[443358345] 'agreement among raft nodes before linearized reading'  (duration: 273.720959ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:16:04.897676Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.653076ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:4020"}
	{"level":"info","ts":"2024-07-17T17:16:04.897713Z","caller":"traceutil/trace.go:171","msg":"trace[1028145181] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1447; }","duration":"258.713079ms","start":"2024-07-17T17:16:04.638991Z","end":"2024-07-17T17:16:04.897704Z","steps":["trace[1028145181] 'agreement among raft nodes before linearized reading'  (duration: 258.611801ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:16:04.897834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.188664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-17T17:16:04.897879Z","caller":"traceutil/trace.go:171","msg":"trace[2003933802] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:1447; }","duration":"176.256334ms","start":"2024-07-17T17:16:04.72161Z","end":"2024-07-17T17:16:04.897866Z","steps":["trace[2003933802] 'agreement among raft nodes before linearized reading'  (duration: 176.195225ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T17:16:08.486464Z","caller":"traceutil/trace.go:171","msg":"trace[117054538] linearizableReadLoop","detail":"{readStateIndex:1509; appliedIndex:1508; }","duration":"300.202882ms","start":"2024-07-17T17:16:08.186247Z","end":"2024-07-17T17:16:08.48645Z","steps":["trace[117054538] 'read index received'  (duration: 300.017849ms)","trace[117054538] 'applied index is now lower than readState.Index'  (duration: 184.28µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T17:16:08.486613Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.349937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-17T17:16:08.486649Z","caller":"traceutil/trace.go:171","msg":"trace[345364053] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1455; }","duration":"300.383676ms","start":"2024-07-17T17:16:08.186243Z","end":"2024-07-17T17:16:08.486627Z","steps":["trace[345364053] 'agreement among raft nodes before linearized reading'  (duration: 300.291247ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:16:08.486672Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:16:08.186211Z","time spent":"300.455473ms","remote":"127.0.0.1:49654","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1135,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-07-17T17:16:08.486677Z","caller":"traceutil/trace.go:171","msg":"trace[1885716528] transaction","detail":"{read_only:false; response_revision:1455; number_of_response:1; }","duration":"304.125787ms","start":"2024-07-17T17:16:08.182539Z","end":"2024-07-17T17:16:08.486665Z","steps":["trace[1885716528] 'process raft request'  (duration: 303.757767ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:16:08.486756Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:16:08.182524Z","time spent":"304.188888ms","remote":"127.0.0.1:49748","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-tpbmdt7r7mmwyvrtzhzzmqx3iq\" mod_revision:1410 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-tpbmdt7r7mmwyvrtzhzzmqx3iq\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-tpbmdt7r7mmwyvrtzhzzmqx3iq\" > >"}
	
	
	==> gcp-auth [aa919d6ecaffe5a059fd1f624e32a8769ad52beed2e788f61a7207d198bfdbf3] <==
	2024/07/17 17:14:44 GCP Auth Webhook started!
	2024/07/17 17:15:42 Ready to marshal response ...
	2024/07/17 17:15:42 Ready to write response ...
	2024/07/17 17:15:42 Ready to marshal response ...
	2024/07/17 17:15:42 Ready to write response ...
	2024/07/17 17:15:42 Ready to marshal response ...
	2024/07/17 17:15:42 Ready to write response ...
	2024/07/17 17:15:47 Ready to marshal response ...
	2024/07/17 17:15:47 Ready to write response ...
	2024/07/17 17:15:49 Ready to marshal response ...
	2024/07/17 17:15:49 Ready to write response ...
	2024/07/17 17:15:53 Ready to marshal response ...
	2024/07/17 17:15:53 Ready to write response ...
	2024/07/17 17:16:14 Ready to marshal response ...
	2024/07/17 17:16:14 Ready to write response ...
	2024/07/17 17:16:14 Ready to marshal response ...
	2024/07/17 17:16:14 Ready to write response ...
	2024/07/17 17:16:15 Ready to marshal response ...
	2024/07/17 17:16:15 Ready to write response ...
	2024/07/17 17:16:26 Ready to marshal response ...
	2024/07/17 17:16:26 Ready to write response ...
	2024/07/17 17:16:32 Ready to marshal response ...
	2024/07/17 17:16:32 Ready to write response ...
	2024/07/17 17:18:38 Ready to marshal response ...
	2024/07/17 17:18:38 Ready to write response ...
	
	
	==> kernel <==
	 17:21:39 up 9 min,  0 users,  load average: 0.03, 0.61, 0.51
	Linux addons-435911 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fe5a18c9713d21755550de03fc5f4144e1fbe17961c2b4edbeef1640383974d0] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0717 17:15:05.542203       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.195.116:443/apis/metrics.k8s.io/v1beta1: Get "https://10.99.195.116:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.99.195.116:443: connect: connection refused
	E0717 17:15:05.582718       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0717 17:15:05.591327       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 17:15:42.568636       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.127.18"}
	I0717 17:16:09.432205       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0717 17:16:10.521278       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0717 17:16:15.223780       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0717 17:16:15.419671       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.253.146"}
	I0717 17:16:16.151558       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0717 17:16:42.845311       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I0717 17:16:49.296190       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 17:16:49.296241       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 17:16:49.323339       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 17:16:49.323391       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 17:16:49.332221       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 17:16:49.332274       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 17:16:49.357731       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 17:16:49.357791       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 17:16:49.405202       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 17:16:49.405312       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 17:16:50.332975       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 17:16:50.406100       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 17:16:50.440035       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0717 17:18:38.756039       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.48.36"}
	
	
	==> kube-controller-manager [9978a55587a895e12fb0d591b73c90758af5fdac4042f39a1d1c5dac70ecf06f] <==
	W0717 17:19:20.302555       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:19:20.302671       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:19:40.829953       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:19:40.830088       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:19:41.237771       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:19:41.237831       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:19:56.063944       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:19:56.064000       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:19:57.199247       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:19:57.199316       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:20:12.931282       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:20:12.931336       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:20:28.202952       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:20:28.203113       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:20:47.169173       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:20:47.169265       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:20:48.983199       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:20:48.983250       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:21:02.558229       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:21:02.558285       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:21:11.682224       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:21:11.682308       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 17:21:27.665099       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 17:21:27.665241       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 17:21:37.923576       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="9.029µs"
	
	
	==> kube-proxy [e792b08ebd5279dc51421031deb493eaee09e20943f593290a4758e30231b64e] <==
	I0717 17:13:20.610171       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:13:20.623956       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.27"]
	I0717 17:13:20.684209       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:13:20.684261       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:13:20.684288       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:13:20.688312       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:13:20.688587       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:13:20.688608       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:13:20.690165       1 config.go:192] "Starting service config controller"
	I0717 17:13:20.690188       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:13:20.690217       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:13:20.690222       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:13:20.690756       1 config.go:319] "Starting node config controller"
	I0717 17:13:20.690762       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:13:20.790480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:13:20.790535       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:13:20.790793       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e0b8a95edb5a47defc155d75aa3fbf7dbdfd1bc1ae0be4d4e830974ce2f42b49] <==
	E0717 17:13:03.003445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 17:13:03.003488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 17:13:03.003492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:13:03.003546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:13:03.003613       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:13:03.003745       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:13:03.003527       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 17:13:03.003822       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 17:13:03.819127       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:13:03.819158       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:13:03.951492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 17:13:03.951546       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 17:13:04.004089       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:13:04.004129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:13:04.149079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 17:13:04.149119       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 17:13:04.174358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 17:13:04.174457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 17:13:04.186369       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 17:13:04.186445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 17:13:04.254490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 17:13:04.254528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 17:13:04.339543       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:13:04.339653       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 17:13:06.799828       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 17:18:45 addons-435911 kubelet[1283]: I0717 17:18:45.293960    1283 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6d9e1dd-adba-422c-985c-253dffb73fa0" path="/var/lib/kubelet/pods/f6d9e1dd-adba-422c-985c-253dffb73fa0/volumes"
	Jul 17 17:19:05 addons-435911 kubelet[1283]: E0717 17:19:05.297552    1283 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:19:05 addons-435911 kubelet[1283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:19:05 addons-435911 kubelet[1283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:19:05 addons-435911 kubelet[1283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:19:05 addons-435911 kubelet[1283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:19:06 addons-435911 kubelet[1283]: I0717 17:19:06.401661    1283 scope.go:117] "RemoveContainer" containerID="48b8492e809b2439fd7a5347d6a340978a4c1da6c72a97bdb76641bd2b13b3ed"
	Jul 17 17:19:06 addons-435911 kubelet[1283]: I0717 17:19:06.416753    1283 scope.go:117] "RemoveContainer" containerID="1eb21978ad8eede94160b2e7ea3617aa15fea3499577c353e5b80a2c3bab42f9"
	Jul 17 17:20:05 addons-435911 kubelet[1283]: E0717 17:20:05.297306    1283 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:20:05 addons-435911 kubelet[1283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:20:05 addons-435911 kubelet[1283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:20:05 addons-435911 kubelet[1283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:20:05 addons-435911 kubelet[1283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:21:05 addons-435911 kubelet[1283]: E0717 17:21:05.297928    1283 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:21:05 addons-435911 kubelet[1283]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:21:05 addons-435911 kubelet[1283]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:21:05 addons-435911 kubelet[1283]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:21:05 addons-435911 kubelet[1283]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:21:37 addons-435911 kubelet[1283]: I0717 17:21:37.950840    1283 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-sn68h" podStartSLOduration=177.660104681 podStartE2EDuration="2m59.950796224s" podCreationTimestamp="2024-07-17 17:18:38 +0000 UTC" firstStartedPulling="2024-07-17 17:18:39.19325641 +0000 UTC m=+334.025107043" lastFinishedPulling="2024-07-17 17:18:41.483947947 +0000 UTC m=+336.315798586" observedRunningTime="2024-07-17 17:18:42.401670511 +0000 UTC m=+337.233521164" watchObservedRunningTime="2024-07-17 17:21:37.950796224 +0000 UTC m=+512.782646874"
	Jul 17 17:21:39 addons-435911 kubelet[1283]: I0717 17:21:39.372764    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/594c6a3c-368e-421e-9d3f-ceb3426c0cf7-tmp-dir\") pod \"594c6a3c-368e-421e-9d3f-ceb3426c0cf7\" (UID: \"594c6a3c-368e-421e-9d3f-ceb3426c0cf7\") "
	Jul 17 17:21:39 addons-435911 kubelet[1283]: I0717 17:21:39.372832    1283 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94w9f\" (UniqueName: \"kubernetes.io/projected/594c6a3c-368e-421e-9d3f-ceb3426c0cf7-kube-api-access-94w9f\") pod \"594c6a3c-368e-421e-9d3f-ceb3426c0cf7\" (UID: \"594c6a3c-368e-421e-9d3f-ceb3426c0cf7\") "
	Jul 17 17:21:39 addons-435911 kubelet[1283]: I0717 17:21:39.373481    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/594c6a3c-368e-421e-9d3f-ceb3426c0cf7-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "594c6a3c-368e-421e-9d3f-ceb3426c0cf7" (UID: "594c6a3c-368e-421e-9d3f-ceb3426c0cf7"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 17 17:21:39 addons-435911 kubelet[1283]: I0717 17:21:39.375958    1283 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594c6a3c-368e-421e-9d3f-ceb3426c0cf7-kube-api-access-94w9f" (OuterVolumeSpecName: "kube-api-access-94w9f") pod "594c6a3c-368e-421e-9d3f-ceb3426c0cf7" (UID: "594c6a3c-368e-421e-9d3f-ceb3426c0cf7"). InnerVolumeSpecName "kube-api-access-94w9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 17:21:39 addons-435911 kubelet[1283]: I0717 17:21:39.473578    1283 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-94w9f\" (UniqueName: \"kubernetes.io/projected/594c6a3c-368e-421e-9d3f-ceb3426c0cf7-kube-api-access-94w9f\") on node \"addons-435911\" DevicePath \"\""
	Jul 17 17:21:39 addons-435911 kubelet[1283]: I0717 17:21:39.473608    1283 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/594c6a3c-368e-421e-9d3f-ceb3426c0cf7-tmp-dir\") on node \"addons-435911\" DevicePath \"\""
	
	
	==> storage-provisioner [7a721d6e9c61620875bf344ec13670996a8189bfa2f61fbb74a2396a22c8419f] <==
	I0717 17:13:25.484253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 17:13:25.736832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 17:13:25.740984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 17:13:25.808013       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 17:13:25.808146       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-435911_764cb43b-36c6-4c15-abfd-05fbe4f1b787!
	I0717 17:13:25.812035       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e2bc7bc-4a41-4c47-829d-1aeba2a7bb49", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-435911_764cb43b-36c6-4c15-abfd-05fbe4f1b787 became leader
	I0717 17:13:26.009864       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-435911_764cb43b-36c6-4c15-abfd-05fbe4f1b787!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-435911 -n addons-435911
helpers_test.go:261: (dbg) Run:  kubectl --context addons-435911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-c59844bb4-qfn6h
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-435911 describe pod metrics-server-c59844bb4-qfn6h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-435911 describe pod metrics-server-c59844bb4-qfn6h: exit status 1 (69.322511ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-c59844bb4-qfn6h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-435911 describe pod metrics-server-c59844bb4-qfn6h: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (335.27s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-435911
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-435911: exit status 82 (2m0.44724977s)

                                                
                                                
-- stdout --
	* Stopping node "addons-435911"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-435911" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-435911
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-435911: exit status 11 (21.639044959s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-435911" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-435911
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-435911: exit status 11 (6.143689549s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-435911" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-435911
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-435911: exit status 11 (6.143177049s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-435911" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 node stop m02 -v=7 --alsologtostderr
E0717 17:35:41.791312   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:36:05.240126   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.47612861s)

                                                
                                                
-- stdout --
	* Stopping node "ha-174628-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:34:56.201485   37025 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:34:56.201738   37025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:34:56.201749   37025 out.go:304] Setting ErrFile to fd 2...
	I0717 17:34:56.201755   37025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:34:56.202019   37025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:34:56.202314   37025 mustload.go:65] Loading cluster: ha-174628
	I0717 17:34:56.202714   37025 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:34:56.202732   37025 stop.go:39] StopHost: ha-174628-m02
	I0717 17:34:56.203110   37025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:34:56.203161   37025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:34:56.220154   37025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43625
	I0717 17:34:56.220625   37025 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:34:56.221180   37025 main.go:141] libmachine: Using API Version  1
	I0717 17:34:56.221201   37025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:34:56.221544   37025 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:34:56.223342   37025 out.go:177] * Stopping node "ha-174628-m02"  ...
	I0717 17:34:56.224830   37025 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 17:34:56.224854   37025 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:34:56.225060   37025 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 17:34:56.225087   37025 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:34:56.227742   37025 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:34:56.228146   37025 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:34:56.228185   37025 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:34:56.228292   37025 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:34:56.228445   37025 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:34:56.228611   37025 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:34:56.228745   37025 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	I0717 17:34:56.318994   37025 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 17:34:56.370699   37025 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 17:34:56.425884   37025 main.go:141] libmachine: Stopping "ha-174628-m02"...
	I0717 17:34:56.425918   37025 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:34:56.427569   37025 main.go:141] libmachine: (ha-174628-m02) Calling .Stop
	I0717 17:34:56.430843   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 0/120
	I0717 17:34:57.432227   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 1/120
	I0717 17:34:58.433654   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 2/120
	I0717 17:34:59.434922   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 3/120
	I0717 17:35:00.436210   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 4/120
	I0717 17:35:01.438418   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 5/120
	I0717 17:35:02.439964   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 6/120
	I0717 17:35:03.441337   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 7/120
	I0717 17:35:04.443486   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 8/120
	I0717 17:35:05.444775   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 9/120
	I0717 17:35:06.446254   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 10/120
	I0717 17:35:07.448533   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 11/120
	I0717 17:35:08.449807   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 12/120
	I0717 17:35:09.451990   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 13/120
	I0717 17:35:10.453447   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 14/120
	I0717 17:35:11.455729   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 15/120
	I0717 17:35:12.457154   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 16/120
	I0717 17:35:13.459371   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 17/120
	I0717 17:35:14.460709   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 18/120
	I0717 17:35:15.462229   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 19/120
	I0717 17:35:16.464360   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 20/120
	I0717 17:35:17.466054   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 21/120
	I0717 17:35:18.468381   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 22/120
	I0717 17:35:19.470220   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 23/120
	I0717 17:35:20.471684   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 24/120
	I0717 17:35:21.473160   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 25/120
	I0717 17:35:22.475325   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 26/120
	I0717 17:35:23.476992   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 27/120
	I0717 17:35:24.478223   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 28/120
	I0717 17:35:25.479703   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 29/120
	I0717 17:35:26.481352   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 30/120
	I0717 17:35:27.483649   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 31/120
	I0717 17:35:28.485450   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 32/120
	I0717 17:35:29.487452   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 33/120
	I0717 17:35:30.488810   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 34/120
	I0717 17:35:31.490119   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 35/120
	I0717 17:35:32.491758   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 36/120
	I0717 17:35:33.494133   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 37/120
	I0717 17:35:34.495346   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 38/120
	I0717 17:35:35.497662   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 39/120
	I0717 17:35:36.499823   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 40/120
	I0717 17:35:37.501455   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 41/120
	I0717 17:35:38.502692   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 42/120
	I0717 17:35:39.504309   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 43/120
	I0717 17:35:40.505767   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 44/120
	I0717 17:35:41.507714   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 45/120
	I0717 17:35:42.509323   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 46/120
	I0717 17:35:43.511327   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 47/120
	I0717 17:35:44.513775   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 48/120
	I0717 17:35:45.514968   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 49/120
	I0717 17:35:46.517531   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 50/120
	I0717 17:35:47.519002   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 51/120
	I0717 17:35:48.520407   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 52/120
	I0717 17:35:49.521558   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 53/120
	I0717 17:35:50.523547   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 54/120
	I0717 17:35:51.525112   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 55/120
	I0717 17:35:52.527601   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 56/120
	I0717 17:35:53.529803   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 57/120
	I0717 17:35:54.531253   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 58/120
	I0717 17:35:55.532591   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 59/120
	I0717 17:35:56.534173   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 60/120
	I0717 17:35:57.535662   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 61/120
	I0717 17:35:58.537217   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 62/120
	I0717 17:35:59.539457   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 63/120
	I0717 17:36:00.541542   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 64/120
	I0717 17:36:01.543466   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 65/120
	I0717 17:36:02.545341   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 66/120
	I0717 17:36:03.547958   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 67/120
	I0717 17:36:04.550105   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 68/120
	I0717 17:36:05.552413   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 69/120
	I0717 17:36:06.554392   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 70/120
	I0717 17:36:07.556637   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 71/120
	I0717 17:36:08.558567   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 72/120
	I0717 17:36:09.559855   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 73/120
	I0717 17:36:10.561726   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 74/120
	I0717 17:36:11.563728   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 75/120
	I0717 17:36:12.565113   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 76/120
	I0717 17:36:13.566519   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 77/120
	I0717 17:36:14.568895   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 78/120
	I0717 17:36:15.570073   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 79/120
	I0717 17:36:16.572260   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 80/120
	I0717 17:36:17.573633   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 81/120
	I0717 17:36:18.575695   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 82/120
	I0717 17:36:19.577219   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 83/120
	I0717 17:36:20.579552   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 84/120
	I0717 17:36:21.581619   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 85/120
	I0717 17:36:22.583610   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 86/120
	I0717 17:36:23.584987   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 87/120
	I0717 17:36:24.586763   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 88/120
	I0717 17:36:25.588173   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 89/120
	I0717 17:36:26.590017   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 90/120
	I0717 17:36:27.591425   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 91/120
	I0717 17:36:28.592676   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 92/120
	I0717 17:36:29.593929   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 93/120
	I0717 17:36:30.596146   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 94/120
	I0717 17:36:31.598036   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 95/120
	I0717 17:36:32.599442   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 96/120
	I0717 17:36:33.600971   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 97/120
	I0717 17:36:34.602180   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 98/120
	I0717 17:36:35.603471   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 99/120
	I0717 17:36:36.605495   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 100/120
	I0717 17:36:37.607369   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 101/120
	I0717 17:36:38.609173   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 102/120
	I0717 17:36:39.610446   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 103/120
	I0717 17:36:40.612093   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 104/120
	I0717 17:36:41.614040   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 105/120
	I0717 17:36:42.615300   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 106/120
	I0717 17:36:43.616721   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 107/120
	I0717 17:36:44.618197   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 108/120
	I0717 17:36:45.619678   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 109/120
	I0717 17:36:46.621660   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 110/120
	I0717 17:36:47.623479   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 111/120
	I0717 17:36:48.624767   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 112/120
	I0717 17:36:49.626698   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 113/120
	I0717 17:36:50.628624   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 114/120
	I0717 17:36:51.629877   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 115/120
	I0717 17:36:52.631171   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 116/120
	I0717 17:36:53.632599   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 117/120
	I0717 17:36:54.634268   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 118/120
	I0717 17:36:55.635457   37025 main.go:141] libmachine: (ha-174628-m02) Waiting for machine to stop 119/120
	I0717 17:36:56.635906   37025 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 17:36:56.636029   37025 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-174628 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr: exit status 3 (19.209527449s)

                                                
                                                
-- stdout --
	ha-174628
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174628-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:36:56.679688   37465 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:36:56.679923   37465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:36:56.679931   37465 out.go:304] Setting ErrFile to fd 2...
	I0717 17:36:56.679936   37465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:36:56.680106   37465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:36:56.680251   37465 out.go:298] Setting JSON to false
	I0717 17:36:56.680275   37465 mustload.go:65] Loading cluster: ha-174628
	I0717 17:36:56.680387   37465 notify.go:220] Checking for updates...
	I0717 17:36:56.680635   37465 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:36:56.680647   37465 status.go:255] checking status of ha-174628 ...
	I0717 17:36:56.681040   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:36:56.681092   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:36:56.695578   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44125
	I0717 17:36:56.695993   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:36:56.696479   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:36:56.696499   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:36:56.696843   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:36:56.697069   37465 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:36:56.698761   37465 status.go:330] ha-174628 host status = "Running" (err=<nil>)
	I0717 17:36:56.698774   37465 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:36:56.699093   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:36:56.699136   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:36:56.713074   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40565
	I0717 17:36:56.713465   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:36:56.713872   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:36:56.713890   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:36:56.714214   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:36:56.714380   37465 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:36:56.717411   37465 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:36:56.717947   37465 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:36:56.717972   37465 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:36:56.718101   37465 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:36:56.718405   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:36:56.718441   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:36:56.732372   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0717 17:36:56.732813   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:36:56.733346   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:36:56.733367   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:36:56.733705   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:36:56.733899   37465 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:36:56.734099   37465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:36:56.734124   37465 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:36:56.736735   37465 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:36:56.737197   37465 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:36:56.737221   37465 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:36:56.737354   37465 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:36:56.737517   37465 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:36:56.737667   37465 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:36:56.737801   37465 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:36:56.817606   37465 ssh_runner.go:195] Run: systemctl --version
	I0717 17:36:56.824291   37465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:36:56.843727   37465 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:36:56.843761   37465 api_server.go:166] Checking apiserver status ...
	I0717 17:36:56.843798   37465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:36:56.861819   37465 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup
	W0717 17:36:56.871320   37465 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:36:56.871367   37465 ssh_runner.go:195] Run: ls
	I0717 17:36:56.875367   37465 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:36:56.879789   37465 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:36:56.879814   37465 status.go:422] ha-174628 apiserver status = Running (err=<nil>)
	I0717 17:36:56.879826   37465 status.go:257] ha-174628 status: &{Name:ha-174628 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:36:56.879849   37465 status.go:255] checking status of ha-174628-m02 ...
	I0717 17:36:56.880138   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:36:56.880168   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:36:56.894418   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0717 17:36:56.894865   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:36:56.895331   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:36:56.895344   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:36:56.895589   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:36:56.895738   37465 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:36:56.897135   37465 status.go:330] ha-174628-m02 host status = "Running" (err=<nil>)
	I0717 17:36:56.897150   37465 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:36:56.897431   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:36:56.897457   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:36:56.911009   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44839
	I0717 17:36:56.911401   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:36:56.911885   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:36:56.911903   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:36:56.912178   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:36:56.912339   37465 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:36:56.914586   37465 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:36:56.914963   37465 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:36:56.914986   37465 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:36:56.915128   37465 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:36:56.915514   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:36:56.915551   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:36:56.930700   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41123
	I0717 17:36:56.931193   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:36:56.931778   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:36:56.931804   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:36:56.932154   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:36:56.932337   37465 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:36:56.932553   37465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:36:56.932578   37465 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:36:56.935347   37465 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:36:56.935745   37465 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:36:56.935789   37465 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:36:56.935980   37465 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:36:56.936145   37465 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:36:56.936309   37465 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:36:56.936456   37465 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	W0717 17:37:15.489164   37465 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.97:22: connect: no route to host
	W0717 17:37:15.489285   37465 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	E0717 17:37:15.489309   37465 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:15.489322   37465 status.go:257] ha-174628-m02 status: &{Name:ha-174628-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 17:37:15.489348   37465 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:15.489356   37465 status.go:255] checking status of ha-174628-m03 ...
	I0717 17:37:15.489752   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:15.489798   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:15.504263   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0717 17:37:15.504688   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:15.505163   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:37:15.505181   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:15.505503   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:15.505683   37465 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:37:15.507378   37465 status.go:330] ha-174628-m03 host status = "Running" (err=<nil>)
	I0717 17:37:15.507397   37465 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:15.507815   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:15.507857   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:15.521865   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36673
	I0717 17:37:15.522275   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:15.522685   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:37:15.522707   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:15.523006   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:15.523157   37465 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:37:15.525787   37465 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:15.526214   37465 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:15.526238   37465 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:15.526423   37465 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:15.526713   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:15.526744   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:15.540624   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35205
	I0717 17:37:15.541088   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:15.541558   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:37:15.541588   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:15.541868   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:15.542071   37465 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:37:15.542262   37465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:15.542289   37465 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:37:15.545072   37465 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:15.545496   37465 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:15.545517   37465 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:15.545676   37465 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:37:15.545852   37465 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:37:15.546003   37465 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:37:15.546144   37465 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:37:15.631080   37465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:15.649283   37465 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:15.649340   37465 api_server.go:166] Checking apiserver status ...
	I0717 17:37:15.649378   37465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:15.665731   37465 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0717 17:37:15.674989   37465 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:15.675046   37465 ssh_runner.go:195] Run: ls
	I0717 17:37:15.679164   37465 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:15.683530   37465 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:15.683550   37465 status.go:422] ha-174628-m03 apiserver status = Running (err=<nil>)
	I0717 17:37:15.683561   37465 status.go:257] ha-174628-m03 status: &{Name:ha-174628-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:15.683575   37465 status.go:255] checking status of ha-174628-m04 ...
	I0717 17:37:15.683880   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:15.683921   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:15.698334   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39335
	I0717 17:37:15.698768   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:15.699198   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:37:15.699218   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:15.699561   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:15.699715   37465 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:37:15.701313   37465 status.go:330] ha-174628-m04 host status = "Running" (err=<nil>)
	I0717 17:37:15.701329   37465 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:15.701691   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:15.701774   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:15.715928   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39377
	I0717 17:37:15.716401   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:15.716870   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:37:15.716895   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:15.717177   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:15.717451   37465 main.go:141] libmachine: (ha-174628-m04) Calling .GetIP
	I0717 17:37:15.720214   37465 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:15.720652   37465 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:15.720691   37465 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:15.720824   37465 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:15.721151   37465 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:15.721183   37465 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:15.735927   37465 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32829
	I0717 17:37:15.736274   37465 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:15.736723   37465 main.go:141] libmachine: Using API Version  1
	I0717 17:37:15.736742   37465 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:15.737077   37465 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:15.737261   37465 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:37:15.737453   37465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:15.737473   37465 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:37:15.740056   37465 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:15.740438   37465 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:15.740476   37465 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:15.740623   37465 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:37:15.740778   37465 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:37:15.740899   37465 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:37:15.741029   37465 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	I0717 17:37:15.829060   37465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:15.845138   37465 status.go:257] ha-174628-m04 status: &{Name:ha-174628-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174628 -n ha-174628
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174628 logs -n 25: (1.300329849s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3227756898/001/cp-test_ha-174628-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628:/home/docker/cp-test_ha-174628-m03_ha-174628.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628 sudo cat                                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m03_ha-174628.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m02:/home/docker/cp-test_ha-174628-m03_ha-174628-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m02 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m03_ha-174628-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04:/home/docker/cp-test_ha-174628-m03_ha-174628-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m04 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m03_ha-174628-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp testdata/cp-test.txt                                                | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3227756898/001/cp-test_ha-174628-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628:/home/docker/cp-test_ha-174628-m04_ha-174628.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628 sudo cat                                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m02:/home/docker/cp-test_ha-174628-m04_ha-174628-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m02 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03:/home/docker/cp-test_ha-174628-m04_ha-174628-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m03 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-174628 node stop m02 -v=7                                                     | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 17:29:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 17:29:16.325220   32725 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:29:16.325468   32725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:29:16.325475   32725 out.go:304] Setting ErrFile to fd 2...
	I0717 17:29:16.325479   32725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:29:16.325665   32725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:29:16.326208   32725 out.go:298] Setting JSON to false
	I0717 17:29:16.327076   32725 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4299,"bootTime":1721233057,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 17:29:16.327136   32725 start.go:139] virtualization: kvm guest
	I0717 17:29:16.329100   32725 out.go:177] * [ha-174628] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 17:29:16.330382   32725 notify.go:220] Checking for updates...
	I0717 17:29:16.330414   32725 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 17:29:16.331726   32725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 17:29:16.333057   32725 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:29:16.334184   32725 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:29:16.335435   32725 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 17:29:16.336607   32725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 17:29:16.338066   32725 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 17:29:16.373367   32725 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 17:29:16.374791   32725 start.go:297] selected driver: kvm2
	I0717 17:29:16.374813   32725 start.go:901] validating driver "kvm2" against <nil>
	I0717 17:29:16.374825   32725 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 17:29:16.375499   32725 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:29:16.375578   32725 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 17:29:16.390884   32725 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 17:29:16.390942   32725 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 17:29:16.391158   32725 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:29:16.391218   32725 cni.go:84] Creating CNI manager for ""
	I0717 17:29:16.391229   32725 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 17:29:16.391234   32725 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 17:29:16.391297   32725 start.go:340] cluster config:
	{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:29:16.391379   32725 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:29:16.393078   32725 out.go:177] * Starting "ha-174628" primary control-plane node in "ha-174628" cluster
	I0717 17:29:16.394342   32725 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:29:16.394375   32725 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 17:29:16.394410   32725 cache.go:56] Caching tarball of preloaded images
	I0717 17:29:16.394484   32725 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 17:29:16.394493   32725 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 17:29:16.394776   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:29:16.394795   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json: {Name:mk775845471b87c734d3c09d31cd9902fcebfad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:16.394910   32725 start.go:360] acquireMachinesLock for ha-174628: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 17:29:16.394935   32725 start.go:364] duration metric: took 14.63µs to acquireMachinesLock for "ha-174628"
	I0717 17:29:16.394952   32725 start.go:93] Provisioning new machine with config: &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:29:16.395005   32725 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 17:29:16.396649   32725 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 17:29:16.396775   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:29:16.396806   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:29:16.410681   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I0717 17:29:16.411151   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:29:16.411676   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:29:16.411698   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:29:16.412056   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:29:16.412243   32725 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:29:16.412423   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:16.412557   32725 start.go:159] libmachine.API.Create for "ha-174628" (driver="kvm2")
	I0717 17:29:16.412586   32725 client.go:168] LocalClient.Create starting
	I0717 17:29:16.412634   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 17:29:16.412669   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:29:16.412692   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:29:16.412752   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 17:29:16.412777   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:29:16.412794   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:29:16.412821   32725 main.go:141] libmachine: Running pre-create checks...
	I0717 17:29:16.412846   32725 main.go:141] libmachine: (ha-174628) Calling .PreCreateCheck
	I0717 17:29:16.413189   32725 main.go:141] libmachine: (ha-174628) Calling .GetConfigRaw
	I0717 17:29:16.413569   32725 main.go:141] libmachine: Creating machine...
	I0717 17:29:16.413583   32725 main.go:141] libmachine: (ha-174628) Calling .Create
	I0717 17:29:16.413753   32725 main.go:141] libmachine: (ha-174628) Creating KVM machine...
	I0717 17:29:16.415006   32725 main.go:141] libmachine: (ha-174628) DBG | found existing default KVM network
	I0717 17:29:16.415670   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:16.415530   32748 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0717 17:29:16.415694   32725 main.go:141] libmachine: (ha-174628) DBG | created network xml: 
	I0717 17:29:16.415712   32725 main.go:141] libmachine: (ha-174628) DBG | <network>
	I0717 17:29:16.415727   32725 main.go:141] libmachine: (ha-174628) DBG |   <name>mk-ha-174628</name>
	I0717 17:29:16.415739   32725 main.go:141] libmachine: (ha-174628) DBG |   <dns enable='no'/>
	I0717 17:29:16.415749   32725 main.go:141] libmachine: (ha-174628) DBG |   
	I0717 17:29:16.415760   32725 main.go:141] libmachine: (ha-174628) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 17:29:16.415769   32725 main.go:141] libmachine: (ha-174628) DBG |     <dhcp>
	I0717 17:29:16.415782   32725 main.go:141] libmachine: (ha-174628) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 17:29:16.415792   32725 main.go:141] libmachine: (ha-174628) DBG |     </dhcp>
	I0717 17:29:16.415818   32725 main.go:141] libmachine: (ha-174628) DBG |   </ip>
	I0717 17:29:16.415836   32725 main.go:141] libmachine: (ha-174628) DBG |   
	I0717 17:29:16.415846   32725 main.go:141] libmachine: (ha-174628) DBG | </network>
	I0717 17:29:16.415851   32725 main.go:141] libmachine: (ha-174628) DBG | 
	I0717 17:29:16.420571   32725 main.go:141] libmachine: (ha-174628) DBG | trying to create private KVM network mk-ha-174628 192.168.39.0/24...
	I0717 17:29:16.483371   32725 main.go:141] libmachine: (ha-174628) DBG | private KVM network mk-ha-174628 192.168.39.0/24 created
	I0717 17:29:16.483396   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:16.483325   32748 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:29:16.483407   32725 main.go:141] libmachine: (ha-174628) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628 ...
	I0717 17:29:16.483423   32725 main.go:141] libmachine: (ha-174628) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 17:29:16.483520   32725 main.go:141] libmachine: (ha-174628) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 17:29:16.710849   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:16.710721   32748 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa...
	I0717 17:29:16.898456   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:16.898351   32748 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/ha-174628.rawdisk...
	I0717 17:29:16.898499   32725 main.go:141] libmachine: (ha-174628) DBG | Writing magic tar header
	I0717 17:29:16.898516   32725 main.go:141] libmachine: (ha-174628) DBG | Writing SSH key tar header
	I0717 17:29:16.898529   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:16.898460   32748 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628 ...
	I0717 17:29:16.898595   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628
	I0717 17:29:16.898615   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 17:29:16.898623   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628 (perms=drwx------)
	I0717 17:29:16.898630   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:29:16.898636   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 17:29:16.898768   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 17:29:16.898786   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 17:29:16.898802   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 17:29:16.898815   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 17:29:16.898831   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 17:29:16.898841   32725 main.go:141] libmachine: (ha-174628) Creating domain...
	I0717 17:29:16.898884   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 17:29:16.898904   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins
	I0717 17:29:16.898915   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home
	I0717 17:29:16.898924   32725 main.go:141] libmachine: (ha-174628) DBG | Skipping /home - not owner
	I0717 17:29:16.899920   32725 main.go:141] libmachine: (ha-174628) define libvirt domain using xml: 
	I0717 17:29:16.899942   32725 main.go:141] libmachine: (ha-174628) <domain type='kvm'>
	I0717 17:29:16.899953   32725 main.go:141] libmachine: (ha-174628)   <name>ha-174628</name>
	I0717 17:29:16.899964   32725 main.go:141] libmachine: (ha-174628)   <memory unit='MiB'>2200</memory>
	I0717 17:29:16.899976   32725 main.go:141] libmachine: (ha-174628)   <vcpu>2</vcpu>
	I0717 17:29:16.899984   32725 main.go:141] libmachine: (ha-174628)   <features>
	I0717 17:29:16.899994   32725 main.go:141] libmachine: (ha-174628)     <acpi/>
	I0717 17:29:16.900004   32725 main.go:141] libmachine: (ha-174628)     <apic/>
	I0717 17:29:16.900011   32725 main.go:141] libmachine: (ha-174628)     <pae/>
	I0717 17:29:16.900026   32725 main.go:141] libmachine: (ha-174628)     
	I0717 17:29:16.900049   32725 main.go:141] libmachine: (ha-174628)   </features>
	I0717 17:29:16.900067   32725 main.go:141] libmachine: (ha-174628)   <cpu mode='host-passthrough'>
	I0717 17:29:16.900091   32725 main.go:141] libmachine: (ha-174628)   
	I0717 17:29:16.900110   32725 main.go:141] libmachine: (ha-174628)   </cpu>
	I0717 17:29:16.900124   32725 main.go:141] libmachine: (ha-174628)   <os>
	I0717 17:29:16.900141   32725 main.go:141] libmachine: (ha-174628)     <type>hvm</type>
	I0717 17:29:16.900152   32725 main.go:141] libmachine: (ha-174628)     <boot dev='cdrom'/>
	I0717 17:29:16.900162   32725 main.go:141] libmachine: (ha-174628)     <boot dev='hd'/>
	I0717 17:29:16.900170   32725 main.go:141] libmachine: (ha-174628)     <bootmenu enable='no'/>
	I0717 17:29:16.900180   32725 main.go:141] libmachine: (ha-174628)   </os>
	I0717 17:29:16.900188   32725 main.go:141] libmachine: (ha-174628)   <devices>
	I0717 17:29:16.900199   32725 main.go:141] libmachine: (ha-174628)     <disk type='file' device='cdrom'>
	I0717 17:29:16.900215   32725 main.go:141] libmachine: (ha-174628)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/boot2docker.iso'/>
	I0717 17:29:16.900226   32725 main.go:141] libmachine: (ha-174628)       <target dev='hdc' bus='scsi'/>
	I0717 17:29:16.900237   32725 main.go:141] libmachine: (ha-174628)       <readonly/>
	I0717 17:29:16.900246   32725 main.go:141] libmachine: (ha-174628)     </disk>
	I0717 17:29:16.900255   32725 main.go:141] libmachine: (ha-174628)     <disk type='file' device='disk'>
	I0717 17:29:16.900266   32725 main.go:141] libmachine: (ha-174628)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 17:29:16.900282   32725 main.go:141] libmachine: (ha-174628)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/ha-174628.rawdisk'/>
	I0717 17:29:16.900291   32725 main.go:141] libmachine: (ha-174628)       <target dev='hda' bus='virtio'/>
	I0717 17:29:16.900301   32725 main.go:141] libmachine: (ha-174628)     </disk>
	I0717 17:29:16.900312   32725 main.go:141] libmachine: (ha-174628)     <interface type='network'>
	I0717 17:29:16.900348   32725 main.go:141] libmachine: (ha-174628)       <source network='mk-ha-174628'/>
	I0717 17:29:16.900367   32725 main.go:141] libmachine: (ha-174628)       <model type='virtio'/>
	I0717 17:29:16.900387   32725 main.go:141] libmachine: (ha-174628)     </interface>
	I0717 17:29:16.900399   32725 main.go:141] libmachine: (ha-174628)     <interface type='network'>
	I0717 17:29:16.900405   32725 main.go:141] libmachine: (ha-174628)       <source network='default'/>
	I0717 17:29:16.900411   32725 main.go:141] libmachine: (ha-174628)       <model type='virtio'/>
	I0717 17:29:16.900417   32725 main.go:141] libmachine: (ha-174628)     </interface>
	I0717 17:29:16.900423   32725 main.go:141] libmachine: (ha-174628)     <serial type='pty'>
	I0717 17:29:16.900429   32725 main.go:141] libmachine: (ha-174628)       <target port='0'/>
	I0717 17:29:16.900435   32725 main.go:141] libmachine: (ha-174628)     </serial>
	I0717 17:29:16.900440   32725 main.go:141] libmachine: (ha-174628)     <console type='pty'>
	I0717 17:29:16.900447   32725 main.go:141] libmachine: (ha-174628)       <target type='serial' port='0'/>
	I0717 17:29:16.900452   32725 main.go:141] libmachine: (ha-174628)     </console>
	I0717 17:29:16.900457   32725 main.go:141] libmachine: (ha-174628)     <rng model='virtio'>
	I0717 17:29:16.900463   32725 main.go:141] libmachine: (ha-174628)       <backend model='random'>/dev/random</backend>
	I0717 17:29:16.900467   32725 main.go:141] libmachine: (ha-174628)     </rng>
	I0717 17:29:16.900472   32725 main.go:141] libmachine: (ha-174628)     
	I0717 17:29:16.900478   32725 main.go:141] libmachine: (ha-174628)     
	I0717 17:29:16.900494   32725 main.go:141] libmachine: (ha-174628)   </devices>
	I0717 17:29:16.900515   32725 main.go:141] libmachine: (ha-174628) </domain>
	I0717 17:29:16.900527   32725 main.go:141] libmachine: (ha-174628) 
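
The domain definition logged above is rendered from an XML template inside the kvm2 driver. The following is a minimal, self-contained sketch (not the driver's actual code) of filling such a template with the values visible in the log: the machine name, 2200 MiB of memory, 2 vCPUs, the rawdisk path, and the private network.

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a trimmed-down stand-in for the libvirt domain XML the
// kvm2 driver defines; only fields visible in the log above are kept.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainSpec struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	spec := domainSpec{
		Name:      "ha-174628",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/ha-174628.rawdisk",
		Network:   "mk-ha-174628",
	}
	// Render the XML that would then be handed to libvirt to define the domain.
	if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}
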
	I0717 17:29:16.904662   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:d8:65:e3 in network default
	I0717 17:29:16.905248   32725 main.go:141] libmachine: (ha-174628) Ensuring networks are active...
	I0717 17:29:16.905287   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:16.905890   32725 main.go:141] libmachine: (ha-174628) Ensuring network default is active
	I0717 17:29:16.906159   32725 main.go:141] libmachine: (ha-174628) Ensuring network mk-ha-174628 is active
	I0717 17:29:16.906607   32725 main.go:141] libmachine: (ha-174628) Getting domain xml...
	I0717 17:29:16.907349   32725 main.go:141] libmachine: (ha-174628) Creating domain...
	I0717 17:29:18.083624   32725 main.go:141] libmachine: (ha-174628) Waiting to get IP...
	I0717 17:29:18.084593   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:18.085066   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:18.085089   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:18.085028   32748 retry.go:31] will retry after 198.059319ms: waiting for machine to come up
	I0717 17:29:18.284591   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:18.285099   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:18.285136   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:18.285044   32748 retry.go:31] will retry after 315.863924ms: waiting for machine to come up
	I0717 17:29:18.602704   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:18.603281   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:18.603312   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:18.603233   32748 retry.go:31] will retry after 365.595994ms: waiting for machine to come up
	I0717 17:29:18.970866   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:18.971206   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:18.971232   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:18.971160   32748 retry.go:31] will retry after 446.072916ms: waiting for machine to come up
	I0717 17:29:19.418679   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:19.419148   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:19.419178   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:19.419082   32748 retry.go:31] will retry after 612.766182ms: waiting for machine to come up
	I0717 17:29:20.034068   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:20.034510   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:20.034538   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:20.034463   32748 retry.go:31] will retry after 865.493851ms: waiting for machine to come up
	I0717 17:29:20.901494   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:20.901946   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:20.901983   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:20.901912   32748 retry.go:31] will retry after 784.975912ms: waiting for machine to come up
	I0717 17:29:21.688270   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:21.688649   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:21.688677   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:21.688600   32748 retry.go:31] will retry after 1.259680032s: waiting for machine to come up
	I0717 17:29:22.949945   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:22.950369   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:22.950393   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:22.950302   32748 retry.go:31] will retry after 1.397281939s: waiting for machine to come up
	I0717 17:29:24.348792   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:24.349222   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:24.349243   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:24.349144   32748 retry.go:31] will retry after 1.757971792s: waiting for machine to come up
	I0717 17:29:26.109282   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:26.109745   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:26.109783   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:26.109714   32748 retry.go:31] will retry after 1.976185642s: waiting for machine to come up
	I0717 17:29:28.087845   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:28.088250   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:28.088269   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:28.088214   32748 retry.go:31] will retry after 3.419200588s: waiting for machine to come up
	I0717 17:29:31.509234   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:31.509640   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:31.509661   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:31.509602   32748 retry.go:31] will retry after 3.616430336s: waiting for machine to come up
	I0717 17:29:35.130399   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.130939   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has current primary IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.130955   32725 main.go:141] libmachine: (ha-174628) Found IP for machine: 192.168.39.100
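
The "will retry after ..." lines above come from a retry helper that polls the DHCP leases for the domain's MAC address with randomized, growing delays until the guest reports an IP. A rough standard-library sketch of that loop follows; lookupIP is a hypothetical stand-in for the lease query, and the backoff constants only approximate the intervals seen in the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
// for the domain's MAC address; it fails until the guest has booted.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ip, err := lookupIP("52:54:00:2f:44:49")
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the delay and add jitter, roughly matching the log
		// (198ms, 315ms, 365ms, ... up to a few seconds).
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for an IP")
}
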
	I0717 17:29:35.130967   32725 main.go:141] libmachine: (ha-174628) Reserving static IP address...
	I0717 17:29:35.131422   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find host DHCP lease matching {name: "ha-174628", mac: "52:54:00:2f:44:49", ip: "192.168.39.100"} in network mk-ha-174628
	I0717 17:29:35.202327   32725 main.go:141] libmachine: (ha-174628) DBG | Getting to WaitForSSH function...
	I0717 17:29:35.202406   32725 main.go:141] libmachine: (ha-174628) Reserved static IP address: 192.168.39.100
	I0717 17:29:35.202422   32725 main.go:141] libmachine: (ha-174628) Waiting for SSH to be available...
	I0717 17:29:35.204817   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.205248   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.205276   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.205479   32725 main.go:141] libmachine: (ha-174628) DBG | Using SSH client type: external
	I0717 17:29:35.205509   32725 main.go:141] libmachine: (ha-174628) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa (-rw-------)
	I0717 17:29:35.205555   32725 main.go:141] libmachine: (ha-174628) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 17:29:35.205571   32725 main.go:141] libmachine: (ha-174628) DBG | About to run SSH command:
	I0717 17:29:35.205589   32725 main.go:141] libmachine: (ha-174628) DBG | exit 0
	I0717 17:29:35.325324   32725 main.go:141] libmachine: (ha-174628) DBG | SSH cmd err, output: <nil>: 
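
The SSH probe above shells out to the system ssh binary with a fixed option list and runs "exit 0" as a liveness check. A minimal sketch of building an equivalent invocation with os/exec; the options are copied from the log line (a subset is shown), and this is illustrative rather than minikube's actual helper.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@192.168.39.100",
		"exit 0", // same liveness probe the log runs
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
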
	I0717 17:29:35.325546   32725 main.go:141] libmachine: (ha-174628) KVM machine creation complete!
	I0717 17:29:35.325833   32725 main.go:141] libmachine: (ha-174628) Calling .GetConfigRaw
	I0717 17:29:35.326468   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:35.326701   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:35.326860   32725 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 17:29:35.326874   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:29:35.328025   32725 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 17:29:35.328041   32725 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 17:29:35.328049   32725 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 17:29:35.328058   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.329977   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.330280   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.330297   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.330428   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:35.330596   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.330732   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.330846   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:35.331005   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:35.331233   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:35.331248   32725 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 17:29:35.427908   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:29:35.427932   32725 main.go:141] libmachine: Detecting the provisioner...
	I0717 17:29:35.427940   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.430644   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.430977   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.431014   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.431078   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:35.431297   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.431462   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.431630   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:35.431782   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:35.431950   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:35.431960   32725 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 17:29:35.529089   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 17:29:35.529169   32725 main.go:141] libmachine: found compatible host: buildroot
	I0717 17:29:35.529188   32725 main.go:141] libmachine: Provisioning with buildroot...
	I0717 17:29:35.529197   32725 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:29:35.529475   32725 buildroot.go:166] provisioning hostname "ha-174628"
	I0717 17:29:35.529501   32725 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:29:35.529704   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.532164   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.532489   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.532511   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.532612   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:35.532804   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.532982   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.533109   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:35.533270   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:35.533478   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:35.533495   32725 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174628 && echo "ha-174628" | sudo tee /etc/hostname
	I0717 17:29:35.642130   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174628
	
	I0717 17:29:35.642159   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.644864   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.645232   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.645256   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.645499   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:35.645684   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.645823   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.645936   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:35.646091   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:35.646296   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:35.646312   32725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174628/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 17:29:35.752600   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
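
The shell snippet above encodes the /etc/hosts update rule: if no line already ends with the hostname, rewrite an existing 127.0.1.1 entry or append a new one. A small Go sketch of the same logic on an in-memory hosts file, assuming nothing beyond the standard library, follows.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic in the log: if the hosts content
// has no entry for hostname, either rewrite the 127.0.1.1 line or append one.
func ensureHostsEntry(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
		return hosts // already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostsEntry(hosts, "ha-174628"))
}
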
	I0717 17:29:35.752628   32725 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 17:29:35.752668   32725 buildroot.go:174] setting up certificates
	I0717 17:29:35.752678   32725 provision.go:84] configureAuth start
	I0717 17:29:35.752689   32725 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:29:35.753010   32725 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:29:35.755301   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.755669   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.755694   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.755836   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.757707   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.757969   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.757991   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.758111   32725 provision.go:143] copyHostCerts
	I0717 17:29:35.758147   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:29:35.758183   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 17:29:35.758199   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:29:35.758268   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 17:29:35.758365   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:29:35.758389   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 17:29:35.758398   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:29:35.758434   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 17:29:35.758490   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:29:35.758516   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 17:29:35.758525   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:29:35.758556   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 17:29:35.758632   32725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.ha-174628 san=[127.0.0.1 192.168.39.100 ha-174628 localhost minikube]
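
The line above reports generating the machine's server certificate with the listed SANs. As a rough illustration only (minikube signs this certificate with its local CA from ca.pem/ca-key.pem rather than self-signing, and its helper is not shown here), a standard-library sketch of producing a certificate with those SANs:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Hypothetical stand-in for the server-cert generation step: a
	// self-signed certificate whose SANs match the ones in the log above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-174628"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-174628", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
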
	I0717 17:29:35.994348   32725 provision.go:177] copyRemoteCerts
	I0717 17:29:35.994408   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 17:29:35.994434   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.997151   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.997448   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.997477   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.997628   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:35.997802   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.997938   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:35.998093   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:29:36.079128   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 17:29:36.079215   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 17:29:36.102149   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 17:29:36.102225   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 17:29:36.123207   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 17:29:36.123268   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 17:29:36.143905   32725 provision.go:87] duration metric: took 391.212994ms to configureAuth
	I0717 17:29:36.143927   32725 buildroot.go:189] setting minikube options for container-runtime
	I0717 17:29:36.144095   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:29:36.144175   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:36.147235   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.147626   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.147652   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.147806   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:36.148011   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.148200   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.148358   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:36.148671   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:36.148837   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:36.148853   32725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 17:29:36.389437   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 17:29:36.389459   32725 main.go:141] libmachine: Checking connection to Docker...
	I0717 17:29:36.389469   32725 main.go:141] libmachine: (ha-174628) Calling .GetURL
	I0717 17:29:36.391084   32725 main.go:141] libmachine: (ha-174628) DBG | Using libvirt version 6000000
	I0717 17:29:36.393220   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.393489   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.393507   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.393720   32725 main.go:141] libmachine: Docker is up and running!
	I0717 17:29:36.393740   32725 main.go:141] libmachine: Reticulating splines...
	I0717 17:29:36.393747   32725 client.go:171] duration metric: took 19.981151074s to LocalClient.Create
	I0717 17:29:36.393772   32725 start.go:167] duration metric: took 19.981216102s to libmachine.API.Create "ha-174628"
	I0717 17:29:36.393782   32725 start.go:293] postStartSetup for "ha-174628" (driver="kvm2")
	I0717 17:29:36.393795   32725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 17:29:36.393816   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:36.394051   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 17:29:36.394082   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:36.396019   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.396337   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.396360   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.396489   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:36.396680   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.396845   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:36.396988   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:29:36.474390   32725 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 17:29:36.478254   32725 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 17:29:36.478277   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 17:29:36.478351   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 17:29:36.478437   32725 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 17:29:36.478450   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /etc/ssl/certs/215772.pem
	I0717 17:29:36.478563   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 17:29:36.487094   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:29:36.508316   32725 start.go:296] duration metric: took 114.523323ms for postStartSetup
	I0717 17:29:36.508386   32725 main.go:141] libmachine: (ha-174628) Calling .GetConfigRaw
	I0717 17:29:36.508909   32725 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:29:36.511347   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.511701   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.511728   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.511910   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:29:36.512089   32725 start.go:128] duration metric: took 20.117074786s to createHost
	I0717 17:29:36.512112   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:36.514288   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.514596   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.514616   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.514768   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:36.514934   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.515092   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.515211   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:36.515345   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:36.515497   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:36.515509   32725 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 17:29:36.613086   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237376.586010587
	
	I0717 17:29:36.613107   32725 fix.go:216] guest clock: 1721237376.586010587
	I0717 17:29:36.613114   32725 fix.go:229] Guest: 2024-07-17 17:29:36.586010587 +0000 UTC Remote: 2024-07-17 17:29:36.512100213 +0000 UTC m=+20.219026136 (delta=73.910374ms)
	I0717 17:29:36.613144   32725 fix.go:200] guest clock delta is within tolerance: 73.910374ms
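
The clock check above compares the guest's `date +%s.%N` output against the host-side reference time and only resyncs when the delta exceeds a tolerance. A small sketch of that comparison using the timestamps from the log; the 2-second threshold is a hypothetical value, not the one hard-coded in minikube's fix package.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N` on the VM (value from the log).
	guest := time.Unix(1721237376, 586010587)
	remote := time.Date(2024, 7, 17, 17, 29, 36, 512100213, time.UTC) // host-side reference

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical threshold for illustration
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
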
	I0717 17:29:36.613149   32725 start.go:83] releasing machines lock for "ha-174628", held for 20.218205036s
	I0717 17:29:36.613166   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:36.613425   32725 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:29:36.615673   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.615986   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.616011   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.616160   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:36.616571   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:36.616781   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:36.616853   32725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 17:29:36.616884   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:36.617024   32725 ssh_runner.go:195] Run: cat /version.json
	I0717 17:29:36.617044   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:36.619217   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.619289   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.619604   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.619630   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.619656   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.619677   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.619888   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:36.619967   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:36.620040   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.620118   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.620169   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:36.620223   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:36.620267   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:29:36.620350   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:29:36.723966   32725 ssh_runner.go:195] Run: systemctl --version
	I0717 17:29:36.729560   32725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 17:29:36.880903   32725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 17:29:36.886275   32725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 17:29:36.886329   32725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 17:29:36.901625   32725 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 17:29:36.901651   32725 start.go:495] detecting cgroup driver to use...
	I0717 17:29:36.901710   32725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 17:29:36.917240   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 17:29:36.930316   32725 docker.go:217] disabling cri-docker service (if available) ...
	I0717 17:29:36.930375   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 17:29:36.943285   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 17:29:36.956166   32725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 17:29:37.080316   32725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 17:29:37.234412   32725 docker.go:233] disabling docker service ...
	I0717 17:29:37.234487   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 17:29:37.247741   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 17:29:37.259812   32725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 17:29:37.368136   32725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 17:29:37.473852   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 17:29:37.486903   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 17:29:37.503326   32725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 17:29:37.503378   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.512631   32725 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 17:29:37.512685   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.521938   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.531128   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.540253   32725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 17:29:37.549895   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.559224   32725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.575215   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
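
The series of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs cgroup manager. The sketch below performs the first two of those edits with the Go standard library; it is illustrative only, since minikube itself runs the sed commands over SSH as logged.

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}
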
	I0717 17:29:37.585513   32725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 17:29:37.594121   32725 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 17:29:37.594176   32725 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 17:29:37.605708   32725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 17:29:37.614736   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:29:37.728181   32725 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 17:29:37.860375   32725 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 17:29:37.860465   32725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 17:29:37.864661   32725 start.go:563] Will wait 60s for crictl version
	I0717 17:29:37.864712   32725 ssh_runner.go:195] Run: which crictl
	I0717 17:29:37.868011   32725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 17:29:37.903302   32725 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 17:29:37.903407   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:29:37.930645   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:29:37.958113   32725 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 17:29:37.959294   32725 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:29:37.961924   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:37.962213   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:37.962231   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:37.962456   32725 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 17:29:37.966414   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:29:37.978440   32725 kubeadm.go:883] updating cluster {Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 17:29:37.978537   32725 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:29:37.978582   32725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:29:38.007842   32725 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 17:29:38.007955   32725 ssh_runner.go:195] Run: which lz4
	I0717 17:29:38.011775   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 17:29:38.011872   32725 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 17:29:38.015704   32725 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 17:29:38.015736   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 17:29:39.213244   32725 crio.go:462] duration metric: took 1.201400295s to copy over tarball
	I0717 17:29:39.213306   32725 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 17:29:41.331453   32725 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.118121996s)
	I0717 17:29:41.331482   32725 crio.go:469] duration metric: took 2.118216371s to extract the tarball
	I0717 17:29:41.331489   32725 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 17:29:41.368676   32725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:29:41.409761   32725 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 17:29:41.409780   32725 cache_images.go:84] Images are preloaded, skipping loading
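	These lines show the image preload path: crictl finds no preloaded images, so the cached preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 is copied to the node and unpacked into /var, after which crictl reports all images present and per-image pulls are skipped. A rough sketch of that unpack step, shelling out to tar with the same flags as the log (paths are assumed from the log, not hard-coded anywhere in minikube):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Same invocation as the log: extract the lz4-compressed preload tarball
		// into /var, preserving security xattrs so image layers keep their
		// file capabilities. Requires root and the lz4 binary on the node.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
		log.Println("preloaded images extracted into /var")
	}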
	I0717 17:29:41.409787   32725 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.30.2 crio true true} ...
	I0717 17:29:41.409910   32725 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 17:29:41.409976   32725 ssh_runner.go:195] Run: crio config
	I0717 17:29:41.453071   32725 cni.go:84] Creating CNI manager for ""
	I0717 17:29:41.453088   32725 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 17:29:41.453096   32725 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 17:29:41.453116   32725 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174628 NodeName:ha-174628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 17:29:41.453274   32725 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174628"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
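	The kubeadm config printed above is generated from the cluster options a few lines earlier (node IP and name, cluster name, pod/service CIDRs, CRI socket) rather than written by hand. As a loose illustration only, not minikube's actual template, a Go text/template sketch that fills the same kind of values into an InitConfiguration stub; the struct and field names here are made up:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// Values below are taken from this log; the type is illustrative.
	type initCfg struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const stub = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`

	func main() {
		cfg := initCfg{
			AdvertiseAddress: "192.168.39.100",
			BindPort:         8443,
			NodeName:         "ha-174628",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		}
		tmpl := template.Must(template.New("kubeadm").Parse(stub))
		// Render to stdout; the log shows the full file being shipped to
		// /var/tmp/minikube/kubeadm.yaml.new a few lines further down.
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			log.Fatal(err)
		}
	}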
	I0717 17:29:41.453297   32725 kube-vip.go:115] generating kube-vip config ...
	I0717 17:29:41.453345   32725 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 17:29:41.468281   32725 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 17:29:41.468385   32725 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
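	kube-vip runs as a static pod: the manifest above is written into the kubelet's staticPodPath (/etc/kubernetes/manifests, per the KubeletConfiguration earlier in this log), and the kubelet creates the pod directly from that file, with no API server involved, which is exactly what a control-plane VIP needs during bootstrap. A minimal, hypothetical sketch of dropping such a manifest in place; the manifest content would be the YAML shown above:

	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	func main() {
		// manifest would hold the kube-vip pod YAML shown above; truncated here.
		manifest := []byte("apiVersion: v1\nkind: Pod\nmetadata:\n  name: kube-vip\n  namespace: kube-system\n# ... remainder of the manifest ...\n")

		// staticPodPath from the kubelet configuration earlier in this log.
		target := filepath.Join("/etc/kubernetes/manifests", "kube-vip.yaml")

		// The kubelet watches this directory and (re)creates the pod whenever
		// the file changes. Writing here normally requires root.
		if err := os.WriteFile(target, manifest, 0o644); err != nil {
			log.Fatal(err)
		}
		log.Printf("wrote %s (%d bytes)", target, len(manifest))
	}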
	I0717 17:29:41.468437   32725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 17:29:41.477282   32725 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 17:29:41.477353   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 17:29:41.485995   32725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 17:29:41.501698   32725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 17:29:41.516538   32725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 17:29:41.531329   32725 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0717 17:29:41.546255   32725 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 17:29:41.549735   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:29:41.560619   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:29:41.682551   32725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:29:41.698891   32725 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628 for IP: 192.168.39.100
	I0717 17:29:41.698912   32725 certs.go:194] generating shared ca certs ...
	I0717 17:29:41.698928   32725 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:41.699093   32725 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 17:29:41.699134   32725 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 17:29:41.699144   32725 certs.go:256] generating profile certs ...
	I0717 17:29:41.699195   32725 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key
	I0717 17:29:41.699210   32725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt with IP's: []
	I0717 17:29:41.761284   32725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt ...
	I0717 17:29:41.761310   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt: {Name:mkaa550cef907e86645a1b32cef4325a9904274f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:41.761468   32725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key ...
	I0717 17:29:41.761478   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key: {Name:mk93234ccb835983ded185c78683a2d2955acd08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:41.761558   32725 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.1f3a6050
	I0717 17:29:41.761592   32725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.1f3a6050 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.254]
	I0717 17:29:41.926788   32725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.1f3a6050 ...
	I0717 17:29:41.926815   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.1f3a6050: {Name:mk6c2c70563a3c319a0aa70f1dbcd8aa0b83e8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:41.926980   32725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.1f3a6050 ...
	I0717 17:29:41.926992   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.1f3a6050: {Name:mka7c12426d9818e100dfaa475f8fa1cd5c6ed78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:41.927072   32725 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.1f3a6050 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt
	I0717 17:29:41.927156   32725 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.1f3a6050 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key
	I0717 17:29:41.927212   32725 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key
	I0717 17:29:41.927226   32725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt with IP's: []
	I0717 17:29:42.096708   32725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt ...
	I0717 17:29:42.096736   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt: {Name:mk296ad0cadac71acfe92f700f1e2191c1858ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:42.096881   32725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key ...
	I0717 17:29:42.096890   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key: {Name:mkfb2f7a0dce8485740f966f03539930631a194b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:42.096969   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 17:29:42.096985   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 17:29:42.096995   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 17:29:42.097005   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 17:29:42.097017   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 17:29:42.097027   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 17:29:42.097039   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 17:29:42.097048   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 17:29:42.097092   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 17:29:42.097125   32725 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 17:29:42.097134   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 17:29:42.097154   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 17:29:42.097177   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 17:29:42.097199   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 17:29:42.097278   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:29:42.097312   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /usr/share/ca-certificates/215772.pem
	I0717 17:29:42.097325   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:29:42.097338   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem -> /usr/share/ca-certificates/21577.pem
	I0717 17:29:42.097949   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 17:29:42.122140   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 17:29:42.143829   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 17:29:42.165164   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 17:29:42.186851   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 17:29:42.208088   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 17:29:42.229178   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 17:29:42.251527   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 17:29:42.272911   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 17:29:42.294253   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 17:29:42.317178   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 17:29:42.338795   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 17:29:42.354136   32725 ssh_runner.go:195] Run: openssl version
	I0717 17:29:42.359686   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 17:29:42.369732   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 17:29:42.373701   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 17:29:42.373759   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 17:29:42.379183   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 17:29:42.389308   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 17:29:42.399571   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:29:42.403780   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:29:42.403834   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:29:42.409017   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 17:29:42.419127   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 17:29:42.429155   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 17:29:42.433281   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 17:29:42.433338   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 17:29:42.438609   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
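	The openssl x509 -hash calls above produce the subject-hash names (for example b5213941.0) that OpenSSL-style trust stores expect under /etc/ssl/certs, and each certificate is then symlinked under that name. A rough Go sketch of the same hash-then-link step, shelling out to openssl and ln exactly as the log does; the certificate path is one of the files shown above, not a fixed minikube path:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		certPath := "/usr/share/ca-certificates/minikubeCA.pem" // from the log above

		// openssl prints the subject hash used to name the trust-store link.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out))

		// Create /etc/ssl/certs/<hash>.0 pointing at the certificate,
		// mirroring the "ln -fs" commands in the log (requires root).
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		if err := exec.Command("sudo", "ln", "-fs", certPath, link).Run(); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link)
	}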
	I0717 17:29:42.448527   32725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 17:29:42.452244   32725 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 17:29:42.452306   32725 kubeadm.go:392] StartCluster: {Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:29:42.452388   32725 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 17:29:42.452437   32725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 17:29:42.495018   32725 cri.go:89] found id: ""
	I0717 17:29:42.495097   32725 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 17:29:42.507394   32725 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 17:29:42.517759   32725 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 17:29:42.529392   32725 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 17:29:42.529413   32725 kubeadm.go:157] found existing configuration files:
	
	I0717 17:29:42.529463   32725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 17:29:42.538875   32725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 17:29:42.538935   32725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 17:29:42.547614   32725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 17:29:42.555978   32725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 17:29:42.556042   32725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 17:29:42.564677   32725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 17:29:42.573054   32725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 17:29:42.573147   32725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 17:29:42.582014   32725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 17:29:42.590217   32725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 17:29:42.590268   32725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 17:29:42.598706   32725 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 17:29:42.820351   32725 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 17:29:53.880142   32725 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 17:29:53.880255   32725 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 17:29:53.880376   32725 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 17:29:53.880488   32725 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 17:29:53.880610   32725 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 17:29:53.880732   32725 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 17:29:53.882105   32725 out.go:204]   - Generating certificates and keys ...
	I0717 17:29:53.882181   32725 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 17:29:53.882251   32725 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 17:29:53.882331   32725 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 17:29:53.882432   32725 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 17:29:53.882528   32725 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 17:29:53.882603   32725 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 17:29:53.882689   32725 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 17:29:53.882811   32725 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-174628 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0717 17:29:53.882860   32725 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 17:29:53.882975   32725 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-174628 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0717 17:29:53.883050   32725 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 17:29:53.883123   32725 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 17:29:53.883183   32725 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 17:29:53.883251   32725 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 17:29:53.883318   32725 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 17:29:53.883372   32725 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 17:29:53.883435   32725 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 17:29:53.883516   32725 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 17:29:53.883568   32725 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 17:29:53.883639   32725 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 17:29:53.883697   32725 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 17:29:53.885138   32725 out.go:204]   - Booting up control plane ...
	I0717 17:29:53.885232   32725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 17:29:53.885326   32725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 17:29:53.885423   32725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 17:29:53.885542   32725 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 17:29:53.885635   32725 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 17:29:53.885670   32725 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 17:29:53.885778   32725 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 17:29:53.885840   32725 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 17:29:53.885889   32725 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.398722ms
	I0717 17:29:53.885968   32725 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 17:29:53.886030   32725 kubeadm.go:310] [api-check] The API server is healthy after 5.960239011s
	I0717 17:29:53.886116   32725 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 17:29:53.886220   32725 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 17:29:53.886275   32725 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 17:29:53.886420   32725 kubeadm.go:310] [mark-control-plane] Marking the node ha-174628 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 17:29:53.886468   32725 kubeadm.go:310] [bootstrap-token] Using token: wck5nb.rxemfngs4xdsbvfr
	I0717 17:29:53.887717   32725 out.go:204]   - Configuring RBAC rules ...
	I0717 17:29:53.887821   32725 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 17:29:53.887899   32725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 17:29:53.888015   32725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 17:29:53.888145   32725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 17:29:53.888284   32725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 17:29:53.888378   32725 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 17:29:53.888511   32725 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 17:29:53.888575   32725 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 17:29:53.888641   32725 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 17:29:53.888649   32725 kubeadm.go:310] 
	I0717 17:29:53.888717   32725 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 17:29:53.888725   32725 kubeadm.go:310] 
	I0717 17:29:53.888786   32725 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 17:29:53.888792   32725 kubeadm.go:310] 
	I0717 17:29:53.888820   32725 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 17:29:53.888871   32725 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 17:29:53.888914   32725 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 17:29:53.888919   32725 kubeadm.go:310] 
	I0717 17:29:53.889001   32725 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 17:29:53.889014   32725 kubeadm.go:310] 
	I0717 17:29:53.889057   32725 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 17:29:53.889063   32725 kubeadm.go:310] 
	I0717 17:29:53.889114   32725 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 17:29:53.889192   32725 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 17:29:53.889284   32725 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 17:29:53.889293   32725 kubeadm.go:310] 
	I0717 17:29:53.889399   32725 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 17:29:53.889464   32725 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 17:29:53.889470   32725 kubeadm.go:310] 
	I0717 17:29:53.889542   32725 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wck5nb.rxemfngs4xdsbvfr \
	I0717 17:29:53.889637   32725 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 17:29:53.889658   32725 kubeadm.go:310] 	--control-plane 
	I0717 17:29:53.889663   32725 kubeadm.go:310] 
	I0717 17:29:53.889733   32725 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 17:29:53.889739   32725 kubeadm.go:310] 
	I0717 17:29:53.889826   32725 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wck5nb.rxemfngs4xdsbvfr \
	I0717 17:29:53.889948   32725 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
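	The join commands above carry a --discovery-token-ca-cert-hash value. That hash is the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate, which lets joining nodes pin the CA they discover via the bootstrap token. A small Go sketch that recomputes it from the CA file; the path is an assumption based on the certificate directory used earlier in this log:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Assumed location of the cluster CA on the control-plane node in this setup.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// kubeadm's discovery hash is sha256 over the DER-encoded
		// SubjectPublicKeyInfo of the CA certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
	}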
	I0717 17:29:53.889963   32725 cni.go:84] Creating CNI manager for ""
	I0717 17:29:53.889968   32725 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 17:29:53.891370   32725 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 17:29:53.892517   32725 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 17:29:53.897907   32725 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 17:29:53.897927   32725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 17:29:53.915259   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 17:29:54.226731   32725 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 17:29:54.226803   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:54.226833   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174628 minikube.k8s.io/updated_at=2024_07_17T17_29_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=ha-174628 minikube.k8s.io/primary=true
	I0717 17:29:54.244180   32725 ops.go:34] apiserver oom_adj: -16
	I0717 17:29:54.402128   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:54.902588   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:55.403044   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:55.903030   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:56.402632   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:56.902925   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:57.403211   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:57.902997   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:58.402988   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:58.902391   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:59.402474   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:59.902649   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:00.402945   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:00.902998   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:01.402544   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:01.902870   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:02.402672   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:02.902952   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:03.402546   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:03.902512   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:04.402639   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:04.902292   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:05.402402   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:05.903191   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:05.991444   32725 kubeadm.go:1113] duration metric: took 11.764692753s to wait for elevateKubeSystemPrivileges
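	The burst of repeated "kubectl get sa default" runs above is a readiness poll: the "default" ServiceAccount only exists once the controller-manager's service-account controller has reconciled the namespace, so minikube retries on a short interval before granting the kube-system privileges and proceeding. A hedged sketch of that pattern; the kubeconfig path and ~500ms cadence are taken from the log, and the helper name is made up:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	// waitForDefaultSA is a hypothetical helper: it polls until the "default"
	// ServiceAccount exists in the default namespace, or the timeout expires.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
				"get", "sa", "default").Run()
			if err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // matches the cadence in the log
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			log.Fatal(err)
		}
		fmt.Println("default service account is ready")
	}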
	I0717 17:30:05.991485   32725 kubeadm.go:394] duration metric: took 23.539184464s to StartCluster
	I0717 17:30:05.991509   32725 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:30:05.991584   32725 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:30:05.992296   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:30:05.992512   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 17:30:05.992540   32725 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 17:30:05.992585   32725 addons.go:69] Setting storage-provisioner=true in profile "ha-174628"
	I0717 17:30:05.992615   32725 addons.go:234] Setting addon storage-provisioner=true in "ha-174628"
	I0717 17:30:05.992523   32725 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:30:05.992628   32725 addons.go:69] Setting default-storageclass=true in profile "ha-174628"
	I0717 17:30:05.992639   32725 start.go:241] waiting for startup goroutines ...
	I0717 17:30:05.992643   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:30:05.992660   32725 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-174628"
	I0717 17:30:05.992726   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:30:05.993053   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:05.993084   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:05.993084   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:05.993106   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:06.008381   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33189
	I0717 17:30:06.008393   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I0717 17:30:06.008836   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.008956   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.009351   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.009376   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.009469   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.009494   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.009713   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.009806   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.009881   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:30:06.010383   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:06.010415   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:06.012035   32725 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:30:06.012365   32725 kapi.go:59] client config for ha-174628: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt", KeyFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key", CAFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 17:30:06.012853   32725 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 17:30:06.013001   32725 addons.go:234] Setting addon default-storageclass=true in "ha-174628"
	I0717 17:30:06.013034   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:30:06.013276   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:06.013299   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:06.025434   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0717 17:30:06.025898   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.026399   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.026424   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.026805   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.026990   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:30:06.027770   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0717 17:30:06.028205   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.028641   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.028660   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.028821   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:30:06.028996   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.029418   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:06.029451   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:06.031030   32725 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 17:30:06.032429   32725 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 17:30:06.032450   32725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 17:30:06.032470   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:30:06.035519   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:06.036037   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:30:06.036074   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:06.036167   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:30:06.036401   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:30:06.036617   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:30:06.036775   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:30:06.046443   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0717 17:30:06.046944   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.047416   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.047439   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.047776   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.047965   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:30:06.049498   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:30:06.049688   32725 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 17:30:06.049704   32725 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 17:30:06.049722   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:30:06.052710   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:06.053337   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:30:06.053364   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:06.053529   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:30:06.053684   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:30:06.053842   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:30:06.053999   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:30:06.118148   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 17:30:06.249823   32725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 17:30:06.271489   32725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 17:30:06.632818   32725 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
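The two kubectl invocations above splice a hosts{} stanza into the CoreDNS Corefile so that host.minikube.internal resolves to the libvirt gateway (192.168.39.1 here). Purely as an illustration of what gets injected (this is not minikube's own code), the stanza can be rendered like this in Go, with the gateway IP as the only variable part:

    package main

    import "fmt"

    // hostsBlock renders the CoreDNS hosts{} stanza that the logged sed
    // command inserts ahead of the "forward . /etc/resolv.conf" plugin line.
    func hostsBlock(gatewayIP string) string {
        return fmt.Sprintf(
            "        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
            gatewayIP)
    }

    func main() {
        fmt.Print(hostsBlock("192.168.39.1"))
    }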
	I0717 17:30:06.906746   32725 main.go:141] libmachine: Making call to close driver server
	I0717 17:30:06.906776   32725 main.go:141] libmachine: (ha-174628) Calling .Close
	I0717 17:30:06.906820   32725 main.go:141] libmachine: Making call to close driver server
	I0717 17:30:06.906975   32725 main.go:141] libmachine: (ha-174628) Calling .Close
	I0717 17:30:06.907456   32725 main.go:141] libmachine: (ha-174628) DBG | Closing plugin on server side
	I0717 17:30:06.907549   32725 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:30:06.907561   32725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:30:06.907579   32725 main.go:141] libmachine: Making call to close driver server
	I0717 17:30:06.907587   32725 main.go:141] libmachine: (ha-174628) Calling .Close
	I0717 17:30:06.907975   32725 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:30:06.908006   32725 main.go:141] libmachine: (ha-174628) DBG | Closing plugin on server side
	I0717 17:30:06.908028   32725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:30:06.908039   32725 main.go:141] libmachine: Making call to close driver server
	I0717 17:30:06.908053   32725 main.go:141] libmachine: (ha-174628) Calling .Close
	I0717 17:30:06.908297   32725 main.go:141] libmachine: (ha-174628) DBG | Closing plugin on server side
	I0717 17:30:06.908369   32725 main.go:141] libmachine: (ha-174628) DBG | Closing plugin on server side
	I0717 17:30:06.908412   32725 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:30:06.908420   32725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:30:06.908425   32725 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:30:06.908441   32725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:30:06.908535   32725 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 17:30:06.908547   32725 round_trippers.go:469] Request Headers:
	I0717 17:30:06.908564   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:30:06.908578   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:30:06.919354   32725 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 17:30:06.919857   32725 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0717 17:30:06.919870   32725 round_trippers.go:469] Request Headers:
	I0717 17:30:06.919877   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:30:06.919880   32725 round_trippers.go:473]     Content-Type: application/json
	I0717 17:30:06.919883   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:30:06.922413   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:30:06.922543   32725 main.go:141] libmachine: Making call to close driver server
	I0717 17:30:06.922554   32725 main.go:141] libmachine: (ha-174628) Calling .Close
	I0717 17:30:06.922830   32725 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:30:06.922848   32725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:30:06.925570   32725 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 17:30:06.927276   32725 addons.go:510] duration metric: took 934.730792ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 17:30:06.927316   32725 start.go:246] waiting for cluster config update ...
	I0717 17:30:06.927331   32725 start.go:255] writing updated cluster config ...
	I0717 17:30:06.929157   32725 out.go:177] 
	I0717 17:30:06.930559   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:30:06.930658   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:30:06.932287   32725 out.go:177] * Starting "ha-174628-m02" control-plane node in "ha-174628" cluster
	I0717 17:30:06.933735   32725 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:30:06.933762   32725 cache.go:56] Caching tarball of preloaded images
	I0717 17:30:06.933852   32725 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 17:30:06.933872   32725 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 17:30:06.933944   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:30:06.934109   32725 start.go:360] acquireMachinesLock for ha-174628-m02: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 17:30:06.934162   32725 start.go:364] duration metric: took 32.269µs to acquireMachinesLock for "ha-174628-m02"
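The machine lock above is acquired almost instantly (32µs) because nothing else holds it, but the logged spec (Delay:500ms Timeout:13m0s) describes a poll-until-timeout pattern. Below is a minimal sketch of that shape only, using a hypothetical O_EXCL lock file rather than the locking library minikube actually uses:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // tryAcquire is a hypothetical stand-in: create a lock file exclusively.
    func tryAcquire(name string) bool {
        f, err := os.OpenFile(os.TempDir()+"/"+name+".lock", os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
        if err != nil {
            return false
        }
        f.Close()
        return true
    }

    // acquire polls every `delay` until the lock is free or `timeout` expires,
    // mirroring the Delay/Timeout values reported in the log.
    func acquire(name string, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for !tryAcquire(name) {
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out acquiring %q after %v", name, timeout)
            }
            time.Sleep(delay)
        }
        return nil
    }

    func main() {
        fmt.Println(acquire("ha-174628-m02", 500*time.Millisecond, 13*time.Minute))
    }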
	I0717 17:30:06.934186   32725 start.go:93] Provisioning new machine with config: &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:30:06.934266   32725 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0717 17:30:06.935760   32725 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 17:30:06.935850   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:06.935883   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:06.950705   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42145
	I0717 17:30:06.951110   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.951621   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.951637   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.951971   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.952163   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetMachineName
	I0717 17:30:06.952334   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:06.952481   32725 start.go:159] libmachine.API.Create for "ha-174628" (driver="kvm2")
	I0717 17:30:06.952503   32725 client.go:168] LocalClient.Create starting
	I0717 17:30:06.952538   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 17:30:06.952574   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:30:06.952594   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:30:06.952651   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 17:30:06.952669   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:30:06.952680   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:30:06.952698   32725 main.go:141] libmachine: Running pre-create checks...
	I0717 17:30:06.952706   32725 main.go:141] libmachine: (ha-174628-m02) Calling .PreCreateCheck
	I0717 17:30:06.952893   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetConfigRaw
	I0717 17:30:06.953325   32725 main.go:141] libmachine: Creating machine...
	I0717 17:30:06.953341   32725 main.go:141] libmachine: (ha-174628-m02) Calling .Create
	I0717 17:30:06.953450   32725 main.go:141] libmachine: (ha-174628-m02) Creating KVM machine...
	I0717 17:30:06.954437   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found existing default KVM network
	I0717 17:30:06.954588   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found existing private KVM network mk-ha-174628
	I0717 17:30:06.954768   32725 main.go:141] libmachine: (ha-174628-m02) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02 ...
	I0717 17:30:06.954796   32725 main.go:141] libmachine: (ha-174628-m02) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 17:30:06.954814   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:06.954714   33086 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:30:06.954917   32725 main.go:141] libmachine: (ha-174628-m02) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 17:30:07.182542   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:07.182425   33086 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa...
	I0717 17:30:07.521008   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:07.520862   33086 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/ha-174628-m02.rawdisk...
	I0717 17:30:07.521037   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Writing magic tar header
	I0717 17:30:07.521049   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Writing SSH key tar header
	I0717 17:30:07.521065   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:07.520995   33086 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02 ...
	I0717 17:30:07.521084   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02
	I0717 17:30:07.521168   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 17:30:07.521196   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02 (perms=drwx------)
	I0717 17:30:07.521207   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:30:07.521226   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 17:30:07.521233   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 17:30:07.521240   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins
	I0717 17:30:07.521250   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home
	I0717 17:30:07.521261   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Skipping /home - not owner
	I0717 17:30:07.521289   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 17:30:07.521306   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 17:30:07.521317   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 17:30:07.521329   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 17:30:07.521339   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 17:30:07.521344   32725 main.go:141] libmachine: (ha-174628-m02) Creating domain...
	I0717 17:30:07.522281   32725 main.go:141] libmachine: (ha-174628-m02) define libvirt domain using xml: 
	I0717 17:30:07.522306   32725 main.go:141] libmachine: (ha-174628-m02) <domain type='kvm'>
	I0717 17:30:07.522318   32725 main.go:141] libmachine: (ha-174628-m02)   <name>ha-174628-m02</name>
	I0717 17:30:07.522330   32725 main.go:141] libmachine: (ha-174628-m02)   <memory unit='MiB'>2200</memory>
	I0717 17:30:07.522344   32725 main.go:141] libmachine: (ha-174628-m02)   <vcpu>2</vcpu>
	I0717 17:30:07.522356   32725 main.go:141] libmachine: (ha-174628-m02)   <features>
	I0717 17:30:07.522384   32725 main.go:141] libmachine: (ha-174628-m02)     <acpi/>
	I0717 17:30:07.522408   32725 main.go:141] libmachine: (ha-174628-m02)     <apic/>
	I0717 17:30:07.522418   32725 main.go:141] libmachine: (ha-174628-m02)     <pae/>
	I0717 17:30:07.522428   32725 main.go:141] libmachine: (ha-174628-m02)     
	I0717 17:30:07.522438   32725 main.go:141] libmachine: (ha-174628-m02)   </features>
	I0717 17:30:07.522451   32725 main.go:141] libmachine: (ha-174628-m02)   <cpu mode='host-passthrough'>
	I0717 17:30:07.522462   32725 main.go:141] libmachine: (ha-174628-m02)   
	I0717 17:30:07.522470   32725 main.go:141] libmachine: (ha-174628-m02)   </cpu>
	I0717 17:30:07.522476   32725 main.go:141] libmachine: (ha-174628-m02)   <os>
	I0717 17:30:07.522484   32725 main.go:141] libmachine: (ha-174628-m02)     <type>hvm</type>
	I0717 17:30:07.522492   32725 main.go:141] libmachine: (ha-174628-m02)     <boot dev='cdrom'/>
	I0717 17:30:07.522497   32725 main.go:141] libmachine: (ha-174628-m02)     <boot dev='hd'/>
	I0717 17:30:07.522506   32725 main.go:141] libmachine: (ha-174628-m02)     <bootmenu enable='no'/>
	I0717 17:30:07.522519   32725 main.go:141] libmachine: (ha-174628-m02)   </os>
	I0717 17:30:07.522538   32725 main.go:141] libmachine: (ha-174628-m02)   <devices>
	I0717 17:30:07.522549   32725 main.go:141] libmachine: (ha-174628-m02)     <disk type='file' device='cdrom'>
	I0717 17:30:07.522567   32725 main.go:141] libmachine: (ha-174628-m02)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/boot2docker.iso'/>
	I0717 17:30:07.522578   32725 main.go:141] libmachine: (ha-174628-m02)       <target dev='hdc' bus='scsi'/>
	I0717 17:30:07.522588   32725 main.go:141] libmachine: (ha-174628-m02)       <readonly/>
	I0717 17:30:07.522601   32725 main.go:141] libmachine: (ha-174628-m02)     </disk>
	I0717 17:30:07.522611   32725 main.go:141] libmachine: (ha-174628-m02)     <disk type='file' device='disk'>
	I0717 17:30:07.522628   32725 main.go:141] libmachine: (ha-174628-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 17:30:07.522643   32725 main.go:141] libmachine: (ha-174628-m02)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/ha-174628-m02.rawdisk'/>
	I0717 17:30:07.522673   32725 main.go:141] libmachine: (ha-174628-m02)       <target dev='hda' bus='virtio'/>
	I0717 17:30:07.522702   32725 main.go:141] libmachine: (ha-174628-m02)     </disk>
	I0717 17:30:07.522716   32725 main.go:141] libmachine: (ha-174628-m02)     <interface type='network'>
	I0717 17:30:07.522727   32725 main.go:141] libmachine: (ha-174628-m02)       <source network='mk-ha-174628'/>
	I0717 17:30:07.522771   32725 main.go:141] libmachine: (ha-174628-m02)       <model type='virtio'/>
	I0717 17:30:07.522783   32725 main.go:141] libmachine: (ha-174628-m02)     </interface>
	I0717 17:30:07.522794   32725 main.go:141] libmachine: (ha-174628-m02)     <interface type='network'>
	I0717 17:30:07.522804   32725 main.go:141] libmachine: (ha-174628-m02)       <source network='default'/>
	I0717 17:30:07.522815   32725 main.go:141] libmachine: (ha-174628-m02)       <model type='virtio'/>
	I0717 17:30:07.522825   32725 main.go:141] libmachine: (ha-174628-m02)     </interface>
	I0717 17:30:07.522835   32725 main.go:141] libmachine: (ha-174628-m02)     <serial type='pty'>
	I0717 17:30:07.522845   32725 main.go:141] libmachine: (ha-174628-m02)       <target port='0'/>
	I0717 17:30:07.522873   32725 main.go:141] libmachine: (ha-174628-m02)     </serial>
	I0717 17:30:07.522894   32725 main.go:141] libmachine: (ha-174628-m02)     <console type='pty'>
	I0717 17:30:07.522907   32725 main.go:141] libmachine: (ha-174628-m02)       <target type='serial' port='0'/>
	I0717 17:30:07.522918   32725 main.go:141] libmachine: (ha-174628-m02)     </console>
	I0717 17:30:07.522930   32725 main.go:141] libmachine: (ha-174628-m02)     <rng model='virtio'>
	I0717 17:30:07.522944   32725 main.go:141] libmachine: (ha-174628-m02)       <backend model='random'>/dev/random</backend>
	I0717 17:30:07.522954   32725 main.go:141] libmachine: (ha-174628-m02)     </rng>
	I0717 17:30:07.522963   32725 main.go:141] libmachine: (ha-174628-m02)     
	I0717 17:30:07.522974   32725 main.go:141] libmachine: (ha-174628-m02)     
	I0717 17:30:07.522983   32725 main.go:141] libmachine: (ha-174628-m02)   </devices>
	I0717 17:30:07.522993   32725 main.go:141] libmachine: (ha-174628-m02) </domain>
	I0717 17:30:07.523003   32725 main.go:141] libmachine: (ha-174628-m02) 
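The XML printed above is handed to libvirt to define and boot the new m02 domain. As a sketch of that step only (the kvm2 driver does this through the libvirt Go bindings; the module path shown here is an assumption and error handling is reduced to panics):

    package main

    import (
        "fmt"
        "os"

        "libvirt.org/go/libvirt" // assumed import path for the Go bindings
    )

    func main() {
        xml, err := os.ReadFile("ha-174628-m02.xml") // the domain XML logged above
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the config
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the defined domain
            panic(err)
        }
        fmt.Println("domain ha-174628-m02 defined and started")
    }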
	I0717 17:30:07.529602   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:6e:7d:7d in network default
	I0717 17:30:07.530175   32725 main.go:141] libmachine: (ha-174628-m02) Ensuring networks are active...
	I0717 17:30:07.530198   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:07.530797   32725 main.go:141] libmachine: (ha-174628-m02) Ensuring network default is active
	I0717 17:30:07.531120   32725 main.go:141] libmachine: (ha-174628-m02) Ensuring network mk-ha-174628 is active
	I0717 17:30:07.531478   32725 main.go:141] libmachine: (ha-174628-m02) Getting domain xml...
	I0717 17:30:07.532194   32725 main.go:141] libmachine: (ha-174628-m02) Creating domain...
	I0717 17:30:08.735024   32725 main.go:141] libmachine: (ha-174628-m02) Waiting to get IP...
	I0717 17:30:08.735908   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:08.736309   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:08.736374   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:08.736278   33086 retry.go:31] will retry after 254.757459ms: waiting for machine to come up
	I0717 17:30:08.992936   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:08.993338   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:08.993368   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:08.993300   33086 retry.go:31] will retry after 349.817685ms: waiting for machine to come up
	I0717 17:30:09.345304   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:09.346035   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:09.346059   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:09.345976   33086 retry.go:31] will retry after 431.850456ms: waiting for machine to come up
	I0717 17:30:09.779407   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:09.779903   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:09.779929   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:09.779875   33086 retry.go:31] will retry after 521.386512ms: waiting for machine to come up
	I0717 17:30:10.303006   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:10.303441   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:10.303462   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:10.303404   33086 retry.go:31] will retry after 654.88693ms: waiting for machine to come up
	I0717 17:30:10.960250   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:10.960665   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:10.960695   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:10.960615   33086 retry.go:31] will retry after 812.663457ms: waiting for machine to come up
	I0717 17:30:11.774425   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:11.774828   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:11.774848   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:11.774757   33086 retry.go:31] will retry after 909.070997ms: waiting for machine to come up
	I0717 17:30:12.684873   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:12.685282   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:12.685305   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:12.685240   33086 retry.go:31] will retry after 1.4060659s: waiting for machine to come up
	I0717 17:30:14.093810   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:14.094221   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:14.094246   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:14.094188   33086 retry.go:31] will retry after 1.617063869s: waiting for machine to come up
	I0717 17:30:15.714144   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:15.714673   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:15.714699   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:15.714626   33086 retry.go:31] will retry after 1.560364715s: waiting for machine to come up
	I0717 17:30:17.276818   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:17.277355   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:17.277380   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:17.277305   33086 retry.go:31] will retry after 1.983112853s: waiting for machine to come up
	I0717 17:30:19.263384   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:19.263769   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:19.263792   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:19.263735   33086 retry.go:31] will retry after 2.937547634s: waiting for machine to come up
	I0717 17:30:22.202387   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:22.202878   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:22.202902   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:22.202827   33086 retry.go:31] will retry after 4.241030651s: waiting for machine to come up
	I0717 17:30:26.445900   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.446246   32725 main.go:141] libmachine: (ha-174628-m02) Found IP for machine: 192.168.39.97
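The "will retry after …" lines above come from a backoff loop that repeatedly queries the DHCP leases of mk-ha-174628 until the new domain reports an address; the intervals grow roughly geometrically with some jitter. A minimal sketch of that shape follows (lookupIP is a hypothetical stand-in, not the driver's actual lease query, and the growth factor is illustrative):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying the network's DHCP
    // leases for the domain's MAC address.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP polls with a growing, jittered interval until an address
    // appears or maxWait elapses, as the retry lines in the log suggest.
    func waitForIP(mac string, maxWait time.Duration) (string, error) {
        deadline := time.Now().Add(maxWait)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2))) // jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            backoff = backoff * 3 / 2
        }
        return "", fmt.Errorf("machine %s did not get an IP within %v", mac, maxWait)
    }

    func main() {
        fmt.Println(waitForIP("52:54:00:26:10:53", 5*time.Second))
    }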
	I0717 17:30:26.446268   32725 main.go:141] libmachine: (ha-174628-m02) Reserving static IP address...
	I0717 17:30:26.446279   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has current primary IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.446684   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find host DHCP lease matching {name: "ha-174628-m02", mac: "52:54:00:26:10:53", ip: "192.168.39.97"} in network mk-ha-174628
	I0717 17:30:26.518198   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Getting to WaitForSSH function...
	I0717 17:30:26.518220   32725 main.go:141] libmachine: (ha-174628-m02) Reserved static IP address: 192.168.39.97
	I0717 17:30:26.518233   32725 main.go:141] libmachine: (ha-174628-m02) Waiting for SSH to be available...
	I0717 17:30:26.520920   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.521410   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:minikube Clientid:01:52:54:00:26:10:53}
	I0717 17:30:26.521444   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.521557   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Using SSH client type: external
	I0717 17:30:26.521586   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa (-rw-------)
	I0717 17:30:26.521615   32725 main.go:141] libmachine: (ha-174628-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 17:30:26.521630   32725 main.go:141] libmachine: (ha-174628-m02) DBG | About to run SSH command:
	I0717 17:30:26.521644   32725 main.go:141] libmachine: (ha-174628-m02) DBG | exit 0
	I0717 17:30:26.648877   32725 main.go:141] libmachine: (ha-174628-m02) DBG | SSH cmd err, output: <nil>: 
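WaitForSSH above shells out to the system ssh client and simply runs exit 0 until it succeeds. A stripped-down sketch of the same probe using os/exec, with the options taken from the logged command line and a simplified fixed-interval retry:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` on the guest through the system ssh client and
    // reports whether the command succeeded.
    func sshReady(user, ip, keyPath string) bool {
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            user+"@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa"
        for !sshReady("docker", "192.168.39.97", key) {
            time.Sleep(time.Second)
        }
        fmt.Println("SSH is available")
    }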
	I0717 17:30:26.649148   32725 main.go:141] libmachine: (ha-174628-m02) KVM machine creation complete!
	I0717 17:30:26.649484   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetConfigRaw
	I0717 17:30:26.650078   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:26.650244   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:26.650403   32725 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 17:30:26.650416   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:30:26.651631   32725 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 17:30:26.651646   32725 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 17:30:26.651652   32725 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 17:30:26.651657   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:26.653793   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.654110   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:26.654135   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.654284   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:26.654463   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.654621   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.654759   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:26.654923   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:26.655115   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:26.655126   32725 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 17:30:26.768026   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:30:26.768087   32725 main.go:141] libmachine: Detecting the provisioner...
	I0717 17:30:26.768100   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:26.770745   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.771069   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:26.771094   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.771291   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:26.771492   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.771685   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.771830   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:26.772002   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:26.772190   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:26.772204   32725 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 17:30:26.881251   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 17:30:26.881312   32725 main.go:141] libmachine: found compatible host: buildroot
	I0717 17:30:26.881322   32725 main.go:141] libmachine: Provisioning with buildroot...
	I0717 17:30:26.881332   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetMachineName
	I0717 17:30:26.881567   32725 buildroot.go:166] provisioning hostname "ha-174628-m02"
	I0717 17:30:26.881593   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetMachineName
	I0717 17:30:26.881754   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:26.884232   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.884579   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:26.884606   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.884707   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:26.884877   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.885132   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.885331   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:26.885482   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:26.885643   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:26.885653   32725 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174628-m02 && echo "ha-174628-m02" | sudo tee /etc/hostname
	I0717 17:30:27.009565   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174628-m02
	
	I0717 17:30:27.009597   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.012536   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.012896   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.012917   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.013166   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.013342   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.013521   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.013661   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.013798   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:27.013959   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:27.013981   32725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174628-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174628-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174628-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 17:30:27.134434   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
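Provisioning the hostname is two shelled-out commands: set the kernel hostname plus /etc/hostname, then patch (or append) the 127.0.1.1 entry in /etc/hosts. A small sketch that just renders those two commands for a given node name, matching the scripts shown in the log (they are executed over the SSH session above):

    package main

    import "fmt"

    // hostnameCommands reproduces the two shell commands from the log for an
    // arbitrary node name.
    func hostnameCommands(name string) (setHostname, patchHosts string) {
        setHostname = fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name)
        patchHosts = fmt.Sprintf(`if ! grep -xq '.*\s%s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts
  else
    echo '127.0.1.1 %s' | sudo tee -a /etc/hosts
  fi
fi`, name, name, name)
        return setHostname, patchHosts
    }

    func main() {
        set, patch := hostnameCommands("ha-174628-m02")
        fmt.Println(set)
        fmt.Println(patch)
    }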
	I0717 17:30:27.134457   32725 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 17:30:27.134476   32725 buildroot.go:174] setting up certificates
	I0717 17:30:27.134488   32725 provision.go:84] configureAuth start
	I0717 17:30:27.134499   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetMachineName
	I0717 17:30:27.134767   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:30:27.137175   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.137604   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.137630   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.137738   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.139589   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.139907   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.139931   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.140043   32725 provision.go:143] copyHostCerts
	I0717 17:30:27.140079   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:30:27.140118   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 17:30:27.140128   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:30:27.140208   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 17:30:27.140307   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:30:27.140347   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 17:30:27.140357   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:30:27.140394   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 17:30:27.140470   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:30:27.140490   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 17:30:27.140496   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:30:27.140531   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 17:30:27.140613   32725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.ha-174628-m02 san=[127.0.0.1 192.168.39.97 ha-174628-m02 localhost minikube]
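The server certificate above is issued from the minikube CA with the node's IP and hostnames as subject alternative names. The following is a self-contained sketch of that kind of issuance with Go's crypto/x509, using the SANs and CertExpiration from the log; it generates a throwaway CA in place of ca.pem/ca-key.pem and elides error handling, so it is an illustration rather than minikube's provisioning code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA standing in for the ca.pem/ca-key.pem pair in the log.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs listed in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-174628-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-174628-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.97")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Printf("issued server certificate, %d DER bytes\n", len(srvDER))
    }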
	I0717 17:30:27.270219   32725 provision.go:177] copyRemoteCerts
	I0717 17:30:27.270272   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 17:30:27.270295   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.272729   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.273036   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.273057   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.273287   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.273469   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.273636   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.273752   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	I0717 17:30:27.362923   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 17:30:27.363014   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 17:30:27.388021   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 17:30:27.388092   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 17:30:27.409698   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 17:30:27.409775   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 17:30:27.431771   32725 provision.go:87] duration metric: took 297.270251ms to configureAuth
	I0717 17:30:27.431801   32725 buildroot.go:189] setting minikube options for container-runtime
	I0717 17:30:27.431978   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:30:27.432045   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.434814   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.435235   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.435262   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.435458   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.435646   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.435843   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.435965   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.436086   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:27.436267   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:27.436283   32725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 17:30:27.697507   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 17:30:27.697534   32725 main.go:141] libmachine: Checking connection to Docker...
	I0717 17:30:27.697542   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetURL
	I0717 17:30:27.698674   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Using libvirt version 6000000
	I0717 17:30:27.700401   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.700707   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.700744   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.700814   32725 main.go:141] libmachine: Docker is up and running!
	I0717 17:30:27.700828   32725 main.go:141] libmachine: Reticulating splines...
	I0717 17:30:27.700836   32725 client.go:171] duration metric: took 20.748326231s to LocalClient.Create
	I0717 17:30:27.700860   32725 start.go:167] duration metric: took 20.748380298s to libmachine.API.Create "ha-174628"
	I0717 17:30:27.700871   32725 start.go:293] postStartSetup for "ha-174628-m02" (driver="kvm2")
	I0717 17:30:27.700884   32725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 17:30:27.700900   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:27.701122   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 17:30:27.701143   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.702855   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.703123   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.703149   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.703262   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.703445   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.703589   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.703743   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	I0717 17:30:27.787294   32725 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 17:30:27.791263   32725 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 17:30:27.791287   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 17:30:27.791350   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 17:30:27.791443   32725 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 17:30:27.791454   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /etc/ssl/certs/215772.pem
	I0717 17:30:27.791556   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 17:30:27.800213   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:30:27.822076   32725 start.go:296] duration metric: took 121.191213ms for postStartSetup
	I0717 17:30:27.822129   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetConfigRaw
	I0717 17:30:27.822728   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:30:27.825244   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.825700   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.825728   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.825990   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:30:27.826187   32725 start.go:128] duration metric: took 20.891909418s to createHost
	I0717 17:30:27.826210   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.828484   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.828854   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.828880   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.829016   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.829195   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.829361   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.829464   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.829627   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:27.829819   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:27.829831   32725 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 17:30:27.937280   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237427.910092037
	
	I0717 17:30:27.937304   32725 fix.go:216] guest clock: 1721237427.910092037
	I0717 17:30:27.937311   32725 fix.go:229] Guest: 2024-07-17 17:30:27.910092037 +0000 UTC Remote: 2024-07-17 17:30:27.826199284 +0000 UTC m=+71.533125181 (delta=83.892753ms)
	I0717 17:30:27.937325   32725 fix.go:200] guest clock delta is within tolerance: 83.892753ms
	I0717 17:30:27.937330   32725 start.go:83] releasing machines lock for "ha-174628-m02", held for 21.003156575s
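The fix.go lines above compare the guest's clock (read over SSH with date +%s.%N) against the host's wall clock and only resynchronise when the difference exceeds a tolerance. Below is a minimal Go sketch of that comparison, using the two timestamps from this log; the 1s tolerance is an assumption for illustration, not minikube's actual threshold.

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaWithinTolerance reports whether the absolute difference between
    // the guest and host clocks is at most tolerance.
    func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tolerance
    }

    func main() {
    	// Guest clock as reported in the log: 1721237427.910092037 seconds since the epoch.
    	guest := time.Unix(0, 1721237427910092037)
    	// Host-side timestamp taken just before the SSH round trip.
    	host := time.Date(2024, time.July, 17, 17, 30, 27, 826199284, time.UTC)
    	// Tolerance here is an assumed value for illustration only.
    	fmt.Println("within tolerance:", clockDeltaWithinTolerance(guest, host, time.Second))
    }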
	I0717 17:30:27.937350   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:27.937657   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:30:27.940144   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.940475   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.940501   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.942679   32725 out.go:177] * Found network options:
	I0717 17:30:27.944029   32725 out.go:177]   - NO_PROXY=192.168.39.100
	W0717 17:30:27.945336   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 17:30:27.945371   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:27.945891   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:27.946053   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:27.946121   32725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 17:30:27.946150   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	W0717 17:30:27.946221   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 17:30:27.946296   32725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 17:30:27.946318   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.948894   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.949231   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.949259   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.949280   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.949462   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.949629   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.949711   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.949733   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.949796   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.949877   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.949964   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	I0717 17:30:27.950032   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.950156   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.950283   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	I0717 17:30:28.183059   32725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 17:30:28.188754   32725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 17:30:28.188826   32725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 17:30:28.203181   32725 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 17:30:28.203205   32725 start.go:495] detecting cgroup driver to use...
	I0717 17:30:28.203275   32725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 17:30:28.218372   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 17:30:28.231086   32725 docker.go:217] disabling cri-docker service (if available) ...
	I0717 17:30:28.231151   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 17:30:28.243630   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 17:30:28.256287   32725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 17:30:28.368408   32725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 17:30:28.504467   32725 docker.go:233] disabling docker service ...
	I0717 17:30:28.504549   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 17:30:28.517898   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 17:30:28.529703   32725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 17:30:28.668306   32725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 17:30:28.772095   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 17:30:28.785276   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 17:30:28.802030   32725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 17:30:28.802113   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.811581   32725 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 17:30:28.811658   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.821646   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.830882   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.840164   32725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 17:30:28.849652   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.858642   32725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.875466   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.884844   32725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 17:30:28.893426   32725 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 17:30:28.893476   32725 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 17:30:28.905941   32725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 17:30:28.914900   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:30:29.028588   32725 ssh_runner.go:195] Run: sudo systemctl restart crio
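The sed commands above rewrite the /etc/crio/crio.conf.d/02-crio.conf drop-in so CRI-O uses the cgroupfs cgroup manager and the registry.k8s.io/pause:3.9 pause image before the service is restarted. The following is an illustrative local equivalent of those two edits in Go; it is a sketch, not minikube's own code.

    package main

    import (
    	"os"
    	"regexp"
    )

    // patchCrioConf rewrites the pause_image and cgroup_manager lines in a
    // CRI-O drop-in file, mirroring the two sed invocations above.
    func patchCrioConf(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
    		panic(err)
    	}
    }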
	I0717 17:30:29.158553   32725 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 17:30:29.158626   32725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 17:30:29.163519   32725 start.go:563] Will wait 60s for crictl version
	I0717 17:30:29.163631   32725 ssh_runner.go:195] Run: which crictl
	I0717 17:30:29.167499   32725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 17:30:29.208428   32725 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 17:30:29.208514   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:30:29.237255   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:30:29.267886   32725 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 17:30:29.269253   32725 out.go:177]   - env NO_PROXY=192.168.39.100
	I0717 17:30:29.270333   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:30:29.273419   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:29.273804   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:29.273833   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:29.274006   32725 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 17:30:29.277914   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
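The one-liner above rewrites /etc/hosts on the guest: it strips any stale host.minikube.internal entry and appends the current gateway IP. A hedged Go sketch of the same idempotent rewrite follows; the path and names are parameterised and this is not the command minikube actually runs.

    package main

    import (
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any existing line for name from an /etc/hosts-style
    // file and appends "ip<TAB>name", mirroring the grep -v / echo pipeline above.
    func ensureHostsEntry(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale entry, re-added below with the current IP
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// Try it against a scratch copy rather than the real /etc/hosts.
    	if err := ensureHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }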
	I0717 17:30:29.290713   32725 mustload.go:65] Loading cluster: ha-174628
	I0717 17:30:29.290872   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:30:29.291103   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:29.291129   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:29.305889   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I0717 17:30:29.306305   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:29.306805   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:29.306827   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:29.307152   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:29.307406   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:30:29.308984   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:30:29.309275   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:29.309322   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:29.324357   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41527
	I0717 17:30:29.324833   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:29.325308   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:29.325329   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:29.325634   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:29.325821   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:30:29.325980   32725 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628 for IP: 192.168.39.97
	I0717 17:30:29.325992   32725 certs.go:194] generating shared ca certs ...
	I0717 17:30:29.326011   32725 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:30:29.326139   32725 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 17:30:29.326189   32725 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 17:30:29.326202   32725 certs.go:256] generating profile certs ...
	I0717 17:30:29.326292   32725 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key
	I0717 17:30:29.326327   32725 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.79966bc2
	I0717 17:30:29.326349   32725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.79966bc2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.97 192.168.39.254]
	I0717 17:30:29.599890   32725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.79966bc2 ...
	I0717 17:30:29.599919   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.79966bc2: {Name:mk4aa20f793a6c7a0fef2d3ef9b599c41575e148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:30:29.600096   32725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.79966bc2 ...
	I0717 17:30:29.600112   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.79966bc2: {Name:mk73dd4d067123d7bffcad1ee9aecc3a37f46efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:30:29.600206   32725 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.79966bc2 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt
	I0717 17:30:29.600356   32725 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.79966bc2 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key
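The apiserver certificate generated above is a serving cert whose IP SANs cover the service IP, localhost, both control-plane node IPs and the kube-vip VIP, signed by the shared minikube CA. Below is a self-contained Go sketch of issuing such a cert; it assumes an RSA CA key in PKCS#1 PEM form, and the file names and Subject are placeholders, not minikube's.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func must(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Load an existing CA cert and (assumed PKCS#1 RSA) key; paths are placeholders.
    	caPEM, err := os.ReadFile("ca.crt")
    	must(err)
    	caKeyPEM, err := os.ReadFile("ca.key")
    	must(err)
    	caBlock, _ := pem.Decode(caPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	if caBlock == nil || keyBlock == nil {
    		panic("could not decode CA PEM data")
    	}
    	ca, err := x509.ParseCertificate(caBlock.Bytes)
    	must(err)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
    	must(err)

    	// New key pair for the apiserver serving cert.
    	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	must(err)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs taken from the crypto.go line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.100"), net.ParseIP("192.168.39.97"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &leafKey.PublicKey, caKey)
    	must(err)
    	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }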
	I0717 17:30:29.600517   32725 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key
	I0717 17:30:29.600533   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 17:30:29.600550   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 17:30:29.600565   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 17:30:29.600590   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 17:30:29.600607   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 17:30:29.600622   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 17:30:29.600641   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 17:30:29.600656   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 17:30:29.600716   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 17:30:29.600755   32725 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 17:30:29.600768   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 17:30:29.600803   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 17:30:29.600835   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 17:30:29.600866   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 17:30:29.600920   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:30:29.600970   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem -> /usr/share/ca-certificates/21577.pem
	I0717 17:30:29.600992   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /usr/share/ca-certificates/215772.pem
	I0717 17:30:29.601010   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:30:29.601048   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:30:29.603806   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:29.604236   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:30:29.604263   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:29.604392   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:30:29.604603   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:30:29.604751   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:30:29.604869   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:30:29.673312   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 17:30:29.678206   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 17:30:29.688082   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 17:30:29.691889   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 17:30:29.701045   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 17:30:29.704724   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 17:30:29.714148   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 17:30:29.718640   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 17:30:29.728611   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 17:30:29.732246   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 17:30:29.742446   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 17:30:29.746417   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 17:30:29.756087   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 17:30:29.780885   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 17:30:29.803687   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 17:30:29.826532   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 17:30:29.849270   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 17:30:29.871848   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 17:30:29.895591   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 17:30:29.918963   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 17:30:29.940294   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 17:30:29.961794   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 17:30:29.984348   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 17:30:30.006158   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 17:30:30.021000   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 17:30:30.036035   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 17:30:30.051290   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 17:30:30.066585   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 17:30:30.082040   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 17:30:30.097282   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 17:30:30.112729   32725 ssh_runner.go:195] Run: openssl version
	I0717 17:30:30.118171   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 17:30:30.128383   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 17:30:30.132271   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 17:30:30.132326   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 17:30:30.138026   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 17:30:30.148061   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 17:30:30.158253   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 17:30:30.162047   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 17:30:30.162099   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 17:30:30.167040   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 17:30:30.177143   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 17:30:30.186721   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:30:30.190631   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:30:30.190677   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:30:30.195729   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
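Each CA bundle copied above is made discoverable to OpenSSL clients by linking it as <subject-hash>.0 under /etc/ssl/certs, which is what the openssl x509 -hash / ln -fs pairs do. A small Go sketch of that step for the minikubeCA bundle follows; it shells out to openssl rather than reimplementing the subject hash.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Path taken from the log; the symlink name comes from openssl's subject hash.
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }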
	I0717 17:30:30.205315   32725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 17:30:30.209014   32725 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 17:30:30.209063   32725 kubeadm.go:934] updating node {m02 192.168.39.97 8443 v1.30.2 crio true true} ...
	I0717 17:30:30.209166   32725 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174628-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 17:30:30.209194   32725 kube-vip.go:115] generating kube-vip config ...
	I0717 17:30:30.209227   32725 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 17:30:30.224818   32725 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 17:30:30.224879   32725 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
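The manifest above is the kube-vip static pod that advertises the control-plane VIP 192.168.39.254 on eth0 and load-balances the API server port 8443 across control-plane nodes. Below is a hedged sketch of rendering such a manifest from a template; the template body and field names are illustrative and heavily trimmed, not minikube's actual kube-vip template.

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeVipTmpl is an illustrative, cut-down template for a kube-vip static pod.
    const kubeVipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{ .Image }}
        args: ["manager"]
        env:
        - name: address
          value: "{{ .VIP }}"
        - name: port
          value: "{{ .Port }}"
        - name: vip_interface
          value: {{ .Interface }}
      hostNetwork: true
    `

    type vipParams struct {
    	Image, VIP, Port, Interface string
    }

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
    	// Values taken from the rendered manifest above.
    	if err := t.Execute(os.Stdout, vipParams{
    		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
    		VIP:       "192.168.39.254",
    		Port:      "8443",
    		Interface: "eth0",
    	}); err != nil {
    		panic(err)
    	}
    }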
	I0717 17:30:30.224922   32725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 17:30:30.233315   32725 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 17:30:30.233361   32725 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 17:30:30.241815   32725 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 17:30:30.241839   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 17:30:30.241903   32725 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0717 17:30:30.241928   32725 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0717 17:30:30.241906   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 17:30:30.245844   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 17:30:30.245870   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 17:31:08.834001   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 17:31:08.834091   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 17:31:08.839777   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 17:31:08.839819   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 17:31:43.865058   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:31:43.881611   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 17:31:43.881700   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 17:31:43.885823   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 17:31:43.885858   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
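The kubectl, kubeadm and kubelet binaries above are downloaded with a ?checksum=file:...sha256 suffix, i.e. the published SHA-256 digest is fetched alongside the binary and verified before the file is cached and copied to the node. Below is a minimal Go sketch of that verification step, with placeholder file names.

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    	"strings"
    )

    // sha256File returns the hex SHA-256 digest of the file at path.
    func sha256File(path string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	h := sha256.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return "", err
    	}
    	return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
    	// Placeholder names: the downloaded binary and its published .sha256 file.
    	got, err := sha256File("kubelet")
    	if err != nil {
    		panic(err)
    	}
    	wantRaw, err := os.ReadFile("kubelet.sha256")
    	if err != nil {
    		panic(err)
    	}
    	want := strings.Fields(string(wantRaw))[0]
    	if got != want {
    		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
    	}
    	fmt.Println("kubelet checksum OK")
    }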
	I0717 17:31:44.227610   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 17:31:44.236593   32725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0717 17:31:44.251937   32725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 17:31:44.266902   32725 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 17:31:44.281671   32725 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 17:31:44.285240   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:31:44.296055   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:31:44.408090   32725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:31:44.423851   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:31:44.424308   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:31:44.424362   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:31:44.439233   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0717 17:31:44.439686   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:31:44.440169   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:31:44.440193   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:31:44.440606   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:31:44.440811   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:31:44.440988   32725 start.go:317] joinCluster: &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:31:44.441111   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 17:31:44.441132   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:31:44.444032   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:31:44.444553   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:31:44.444575   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:31:44.444729   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:31:44.444908   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:31:44.445084   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:31:44.445221   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:31:44.599900   32725 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:31:44.599950   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token knhmpz.6rn9meqs7468hbpw --discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174628-m02 --control-plane --apiserver-advertise-address=192.168.39.97 --apiserver-bind-port=8443"
	I0717 17:32:06.001020   32725 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token knhmpz.6rn9meqs7468hbpw --discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174628-m02 --control-plane --apiserver-advertise-address=192.168.39.97 --apiserver-bind-port=8443": (21.401040933s)
	I0717 17:32:06.001063   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 17:32:06.440073   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174628-m02 minikube.k8s.io/updated_at=2024_07_17T17_32_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=ha-174628 minikube.k8s.io/primary=false
	I0717 17:32:06.560695   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174628-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 17:32:06.653563   32725 start.go:319] duration metric: took 22.212571193s to joinCluster
	I0717 17:32:06.653658   32725 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:32:06.653958   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:32:06.655049   32725 out.go:177] * Verifying Kubernetes components...
	I0717 17:32:06.656342   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:32:06.876327   32725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:32:06.918990   32725 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:32:06.919376   32725 kapi.go:59] client config for ha-174628: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt", KeyFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key", CAFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 17:32:06.919482   32725 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.100:8443
	I0717 17:32:06.919761   32725 node_ready.go:35] waiting up to 6m0s for node "ha-174628-m02" to be "Ready" ...
	I0717 17:32:06.919865   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:06.919876   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:06.919887   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:06.919897   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:06.930531   32725 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 17:32:07.420258   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:07.420280   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:07.420287   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:07.420291   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:07.425735   32725 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 17:32:07.920436   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:07.920460   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:07.920471   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:07.920476   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:07.927951   32725 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 17:32:08.419963   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:08.419986   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:08.419997   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:08.420001   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:08.423661   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:08.920790   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:08.920815   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:08.920826   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:08.920831   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:08.924676   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:08.925156   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
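The repeated GETs above are node_ready.go polling /api/v1/nodes/ha-174628-m02 until the node reports Ready (it stays False here until the node's networking and kubelet settle). Below is a minimal client-go sketch of the same poll; the kubeconfig path and node name are taken from this log, everything else is illustrative.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	kubeconfig := "/home/jenkins/minikube-integration/19283-14386/kubeconfig"
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Same budget as node_ready.go above: up to 6 minutes.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	for {
    		if ctx.Err() != nil {
    			panic("timed out waiting for node to become Ready")
    		}
    		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-174628-m02", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node ha-174628-m02 is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }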
	I0717 17:32:09.420493   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:09.420516   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:09.420524   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:09.420529   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:09.423670   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:09.920007   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:09.920027   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:09.920038   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:09.920043   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:10.027683   32725 round_trippers.go:574] Response Status: 200 OK in 107 milliseconds
	I0717 17:32:10.420452   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:10.420482   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:10.420493   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:10.420499   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:10.424082   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:10.920168   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:10.920188   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:10.920196   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:10.920202   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:10.923032   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:11.420207   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:11.420234   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:11.420244   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:11.420249   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:11.423870   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:11.424327   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:11.920146   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:11.920169   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:11.920179   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:11.920185   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:11.923481   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:12.420855   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:12.420876   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:12.420883   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:12.420889   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:12.423889   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:12.920626   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:12.920645   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:12.920657   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:12.920661   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:12.926695   32725 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 17:32:13.420156   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:13.420182   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:13.420190   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:13.420194   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:13.423208   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:13.920311   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:13.920337   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:13.920346   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:13.920351   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:13.923041   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:13.923545   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:14.420903   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:14.420926   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:14.420939   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:14.420955   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:14.424316   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:14.920976   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:14.921004   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:14.921012   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:14.921016   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:14.924059   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:15.420568   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:15.420591   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:15.420602   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:15.420608   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:15.423888   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:15.920083   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:15.920110   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:15.920119   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:15.920124   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:15.923607   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:15.924223   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:16.420345   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:16.420373   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:16.420384   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:16.420387   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:16.423368   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:16.920615   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:16.920635   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:16.920643   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:16.920646   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:16.923724   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:17.420234   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:17.420257   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:17.420268   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:17.420273   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:17.423467   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:17.920039   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:17.920061   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:17.920070   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:17.920079   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:17.923326   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:18.420955   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:18.420982   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:18.420991   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:18.420994   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:18.424015   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:18.424435   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:18.920864   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:18.920886   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:18.920897   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:18.920901   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:18.924265   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:19.420126   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:19.420147   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:19.420155   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:19.420160   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:19.423319   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:19.920559   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:19.920584   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:19.920593   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:19.920598   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:19.924134   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:20.419960   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:20.419980   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:20.419988   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:20.419992   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:20.423165   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:20.919934   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:20.919954   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:20.919962   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:20.919966   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:20.923306   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:20.923774   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:21.420249   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:21.420273   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:21.420281   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:21.420286   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:21.423157   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:21.920673   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:21.920694   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:21.920702   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:21.920706   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:21.924190   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:22.420227   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:22.420249   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:22.420257   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:22.420261   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:22.423629   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:22.920462   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:22.920495   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:22.920508   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:22.920516   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:22.923703   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:22.924251   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:23.420726   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:23.420749   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.420760   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.420764   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.424452   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:23.920232   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:23.920254   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.920262   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.920266   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.923497   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:23.923952   32725 node_ready.go:49] node "ha-174628-m02" has status "Ready":"True"
	I0717 17:32:23.923968   32725 node_ready.go:38] duration metric: took 17.004183592s for node "ha-174628-m02" to be "Ready" ...
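The loop above is minikube's node_ready wait: it re-fetches /api/v1/nodes/ha-174628-m02 roughly every 500ms until the node's Ready condition turns True, which took about 17s here. A minimal client-go sketch of the same check follows; the kubeconfig path and the exact poll cadence are illustrative assumptions, not values taken from this run.

    // node_ready_sketch.go - illustrative only; not minikube's actual implementation.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a kubeconfig for the ha-174628 cluster exists at this path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        const node = "ha-174628-m02"
        for {
            n, err := client.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
            if err == nil {
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Printf("node %q is Ready\n", node)
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log
        }
    }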
	I0717 17:32:23.923985   32725 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 17:32:23.924037   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:32:23.924048   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.924055   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.924058   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.928855   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:32:23.934963   32725 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.935053   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ljjl7
	I0717 17:32:23.935067   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.935077   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.935084   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.937579   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.938300   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:23.938317   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.938328   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.938334   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.940511   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.941063   32725 pod_ready.go:92] pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:23.941082   32725 pod_ready.go:81] duration metric: took 6.095417ms for pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.941093   32725 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.941149   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nb567
	I0717 17:32:23.941160   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.941170   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.941175   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.943542   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.944209   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:23.944299   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.944319   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.944331   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.946492   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.946939   32725 pod_ready.go:92] pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:23.946953   32725 pod_ready.go:81] duration metric: took 5.85384ms for pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.946960   32725 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.946998   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628
	I0717 17:32:23.947005   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.947013   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.947021   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.948985   32725 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 17:32:23.949586   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:23.949602   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.949609   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.949613   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.951824   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.952412   32725 pod_ready.go:92] pod "etcd-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:23.952433   32725 pod_ready.go:81] duration metric: took 5.466483ms for pod "etcd-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.952444   32725 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.952497   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628-m02
	I0717 17:32:23.952505   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.952512   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.952517   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.954704   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.955096   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:23.955107   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.955114   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.955118   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.957222   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.957579   32725 pod_ready.go:92] pod "etcd-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:23.957593   32725 pod_ready.go:81] duration metric: took 5.142703ms for pod "etcd-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.957605   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:24.121001   32725 request.go:629] Waited for 163.340264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628
	I0717 17:32:24.121098   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628
	I0717 17:32:24.121109   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:24.121121   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:24.121132   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:24.124584   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:24.320415   32725 request.go:629] Waited for 195.279708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:24.320516   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:24.320531   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:24.320544   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:24.320554   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:24.323699   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:24.324116   32725 pod_ready.go:92] pod "kube-apiserver-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:24.324134   32725 pod_ready.go:81] duration metric: took 366.520996ms for pod "kube-apiserver-ha-174628" in "kube-system" namespace to be "Ready" ...
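Several requests in this phase are delayed by lines such as "Waited for 195.279708ms due to client-side throttling, not priority and fairness". That delay comes from client-go's own rate limiter pacing the burst of per-pod and per-node GETs, not from the API server: with QPS and Burst left unset on a rest.Config, client-go defaults to roughly 5 requests/s with a burst of 10. A hedged sketch of raising those limits is below; the chosen numbers and kubeconfig path are illustrative.

    // throttle_sketch.go - illustrative: raising client-go's client-side rate limits.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: kubeconfig path is illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        // With QPS/Burst unset, client-go throttles at ~5 requests/s (burst 10),
        // which is what produces the "client-side throttling" waits in the log.
        cfg.QPS = 50
        cfg.Burst = 100

        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", client)
    }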
	I0717 17:32:24.324145   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:24.520636   32725 request.go:629] Waited for 196.429854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m02
	I0717 17:32:24.520721   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m02
	I0717 17:32:24.520733   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:24.520744   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:24.520752   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:24.523620   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:24.720686   32725 request.go:629] Waited for 196.346873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:24.720770   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:24.720781   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:24.720792   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:24.720801   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:24.724330   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:24.724759   32725 pod_ready.go:92] pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:24.724778   32725 pod_ready.go:81] duration metric: took 400.626087ms for pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:24.724790   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:24.921008   32725 request.go:629] Waited for 196.113008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628
	I0717 17:32:24.921084   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628
	I0717 17:32:24.921096   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:24.921107   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:24.921114   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:24.924239   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:25.121267   32725 request.go:629] Waited for 196.334079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:25.121368   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:25.121379   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:25.121389   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:25.121397   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:25.124591   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:25.125192   32725 pod_ready.go:92] pod "kube-controller-manager-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:25.125212   32725 pod_ready.go:81] duration metric: took 400.414336ms for pod "kube-controller-manager-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:25.125224   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:25.321146   32725 request.go:629] Waited for 195.85089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m02
	I0717 17:32:25.321231   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m02
	I0717 17:32:25.321241   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:25.321253   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:25.321261   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:25.324440   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:25.520408   32725 request.go:629] Waited for 195.280831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:25.520479   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:25.520485   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:25.520492   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:25.520496   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:25.523976   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:25.524695   32725 pod_ready.go:92] pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:25.524716   32725 pod_ready.go:81] duration metric: took 399.480457ms for pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:25.524727   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7lchn" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:25.720682   32725 request.go:629] Waited for 195.864209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lchn
	I0717 17:32:25.720761   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lchn
	I0717 17:32:25.720773   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:25.720784   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:25.720796   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:25.723983   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:25.921053   32725 request.go:629] Waited for 196.406095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:25.921137   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:25.921149   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:25.921158   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:25.921165   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:25.925851   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:32:25.926451   32725 pod_ready.go:92] pod "kube-proxy-7lchn" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:25.926473   32725 pod_ready.go:81] duration metric: took 401.739165ms for pod "kube-proxy-7lchn" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:25.926486   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fqf9q" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:26.120517   32725 request.go:629] Waited for 193.963518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fqf9q
	I0717 17:32:26.120594   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fqf9q
	I0717 17:32:26.120601   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:26.120614   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:26.120619   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:26.123941   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:26.320843   32725 request.go:629] Waited for 195.959286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:26.320896   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:26.320902   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:26.320913   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:26.320920   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:26.323988   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:26.324768   32725 pod_ready.go:92] pod "kube-proxy-fqf9q" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:26.324787   32725 pod_ready.go:81] duration metric: took 398.293955ms for pod "kube-proxy-fqf9q" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:26.324799   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:26.521285   32725 request.go:629] Waited for 196.402688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628
	I0717 17:32:26.521333   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628
	I0717 17:32:26.521337   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:26.521345   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:26.521348   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:26.524367   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:26.720227   32725 request.go:629] Waited for 195.278906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:26.720311   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:26.720318   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:26.720332   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:26.720338   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:26.723719   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:26.724241   32725 pod_ready.go:92] pod "kube-scheduler-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:26.724260   32725 pod_ready.go:81] duration metric: took 399.453568ms for pod "kube-scheduler-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:26.724272   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:26.920265   32725 request.go:629] Waited for 195.912827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m02
	I0717 17:32:26.920345   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m02
	I0717 17:32:26.920352   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:26.920362   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:26.920366   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:26.924161   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:27.120304   32725 request.go:629] Waited for 195.349145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:27.120370   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:27.120375   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.120383   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.120387   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.123310   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:27.123753   32725 pod_ready.go:92] pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:27.123768   32725 pod_ready.go:81] duration metric: took 399.488698ms for pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:27.123778   32725 pod_ready.go:38] duration metric: took 3.199783373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 17:32:27.123802   32725 api_server.go:52] waiting for apiserver process to appear ...
	I0717 17:32:27.123879   32725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:32:27.139407   32725 api_server.go:72] duration metric: took 20.485712083s to wait for apiserver process to appear ...
	I0717 17:32:27.139433   32725 api_server.go:88] waiting for apiserver healthz status ...
	I0717 17:32:27.139457   32725 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0717 17:32:27.143893   32725 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0717 17:32:27.143959   32725 round_trippers.go:463] GET https://192.168.39.100:8443/version
	I0717 17:32:27.143966   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.143974   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.143978   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.144741   32725 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 17:32:27.144832   32725 api_server.go:141] control plane version: v1.30.2
	I0717 17:32:27.144847   32725 api_server.go:131] duration metric: took 5.408081ms to wait for apiserver health ...
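After confirming the apiserver process with pgrep, minikube probes https://192.168.39.100:8443/healthz, expecting a 200 with the literal body "ok", then reads /version to confirm the control-plane version (v1.30.2 here). A minimal stand-in for that probe is sketched below; it skips TLS verification purely to keep the example short, whereas the real client authenticates with the cluster's certificates.

    // healthz_sketch.go - illustrative probe of the apiserver health endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Assumption: certificate verification is skipped here only for brevity.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.100:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the body "ok", as in the log above.
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
    }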
	I0717 17:32:27.144853   32725 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 17:32:27.321298   32725 request.go:629] Waited for 176.369505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:32:27.321363   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:32:27.321369   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.321376   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.321381   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.326151   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:32:27.330339   32725 system_pods.go:59] 17 kube-system pods found
	I0717 17:32:27.330383   32725 system_pods.go:61] "coredns-7db6d8ff4d-ljjl7" [2c4857a1-6ccd-4122-80b5-f5bcfd2e307f] Running
	I0717 17:32:27.330389   32725 system_pods.go:61] "coredns-7db6d8ff4d-nb567" [1739ac64-be05-4438-9a8f-a0d2821a1650] Running
	I0717 17:32:27.330393   32725 system_pods.go:61] "etcd-ha-174628" [005dbd48-14a2-458a-a8b3-252696a4ce85] Running
	I0717 17:32:27.330396   32725 system_pods.go:61] "etcd-ha-174628-m02" [6598f8f5-41df-46a9-bb82-fcf2ad182e60] Running
	I0717 17:32:27.330399   32725 system_pods.go:61] "kindnet-79txz" [8c09c315-591a-4835-a433-f3bc3283f305] Running
	I0717 17:32:27.330402   32725 system_pods.go:61] "kindnet-k6jnp" [9bca93ed-aca5-4540-990c-d9e6209d12d0] Running
	I0717 17:32:27.330405   32725 system_pods.go:61] "kube-apiserver-ha-174628" [3f169484-b9b1-4be6-abec-2309c0bfecba] Running
	I0717 17:32:27.330408   32725 system_pods.go:61] "kube-apiserver-ha-174628-m02" [316d349c-f099-45c3-a9ab-34fbcaeaae02] Running
	I0717 17:32:27.330410   32725 system_pods.go:61] "kube-controller-manager-ha-174628" [ea259b8d-9fcb-4fb1-9e32-75d6a47e44ed] Running
	I0717 17:32:27.330415   32725 system_pods.go:61] "kube-controller-manager-ha-174628-m02" [0374a405-7fb7-4367-997e-0ac06d57338d] Running
	I0717 17:32:27.330417   32725 system_pods.go:61] "kube-proxy-7lchn" [a01b695f-ec8b-4727-9c82-4251aa34d682] Running
	I0717 17:32:27.330421   32725 system_pods.go:61] "kube-proxy-fqf9q" [f74d57a9-38a2-464d-991f-fc8905fdbe3f] Running
	I0717 17:32:27.330424   32725 system_pods.go:61] "kube-scheduler-ha-174628" [1776b347-cc13-44da-a60a-199bdb85d2c2] Running
	I0717 17:32:27.330426   32725 system_pods.go:61] "kube-scheduler-ha-174628-m02" [ce3683eb-351e-40d4-a704-13dfddc2bdea] Running
	I0717 17:32:27.330429   32725 system_pods.go:61] "kube-vip-ha-174628" [b2d62768-e68e-4ce3-ad84-31ddac00688e] Running
	I0717 17:32:27.330431   32725 system_pods.go:61] "kube-vip-ha-174628-m02" [a6656a18-6176-4291-a094-e4b942e9ba1c] Running
	I0717 17:32:27.330434   32725 system_pods.go:61] "storage-provisioner" [8c0601bb-36f6-434d-8e9d-1e326bf682f5] Running
	I0717 17:32:27.330439   32725 system_pods.go:74] duration metric: took 185.581054ms to wait for pod list to return data ...
	I0717 17:32:27.330446   32725 default_sa.go:34] waiting for default service account to be created ...
	I0717 17:32:27.520831   32725 request.go:629] Waited for 190.319635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0717 17:32:27.520881   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0717 17:32:27.520891   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.520898   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.520903   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.524042   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:27.524241   32725 default_sa.go:45] found service account: "default"
	I0717 17:32:27.524258   32725 default_sa.go:55] duration metric: took 193.805436ms for default service account to be created ...
	I0717 17:32:27.524268   32725 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 17:32:27.720750   32725 request.go:629] Waited for 196.407045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:32:27.720811   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:32:27.720816   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.720824   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.720828   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.726452   32725 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 17:32:27.730318   32725 system_pods.go:86] 17 kube-system pods found
	I0717 17:32:27.730340   32725 system_pods.go:89] "coredns-7db6d8ff4d-ljjl7" [2c4857a1-6ccd-4122-80b5-f5bcfd2e307f] Running
	I0717 17:32:27.730346   32725 system_pods.go:89] "coredns-7db6d8ff4d-nb567" [1739ac64-be05-4438-9a8f-a0d2821a1650] Running
	I0717 17:32:27.730350   32725 system_pods.go:89] "etcd-ha-174628" [005dbd48-14a2-458a-a8b3-252696a4ce85] Running
	I0717 17:32:27.730354   32725 system_pods.go:89] "etcd-ha-174628-m02" [6598f8f5-41df-46a9-bb82-fcf2ad182e60] Running
	I0717 17:32:27.730358   32725 system_pods.go:89] "kindnet-79txz" [8c09c315-591a-4835-a433-f3bc3283f305] Running
	I0717 17:32:27.730362   32725 system_pods.go:89] "kindnet-k6jnp" [9bca93ed-aca5-4540-990c-d9e6209d12d0] Running
	I0717 17:32:27.730366   32725 system_pods.go:89] "kube-apiserver-ha-174628" [3f169484-b9b1-4be6-abec-2309c0bfecba] Running
	I0717 17:32:27.730369   32725 system_pods.go:89] "kube-apiserver-ha-174628-m02" [316d349c-f099-45c3-a9ab-34fbcaeaae02] Running
	I0717 17:32:27.730373   32725 system_pods.go:89] "kube-controller-manager-ha-174628" [ea259b8d-9fcb-4fb1-9e32-75d6a47e44ed] Running
	I0717 17:32:27.730377   32725 system_pods.go:89] "kube-controller-manager-ha-174628-m02" [0374a405-7fb7-4367-997e-0ac06d57338d] Running
	I0717 17:32:27.730381   32725 system_pods.go:89] "kube-proxy-7lchn" [a01b695f-ec8b-4727-9c82-4251aa34d682] Running
	I0717 17:32:27.730384   32725 system_pods.go:89] "kube-proxy-fqf9q" [f74d57a9-38a2-464d-991f-fc8905fdbe3f] Running
	I0717 17:32:27.730388   32725 system_pods.go:89] "kube-scheduler-ha-174628" [1776b347-cc13-44da-a60a-199bdb85d2c2] Running
	I0717 17:32:27.730392   32725 system_pods.go:89] "kube-scheduler-ha-174628-m02" [ce3683eb-351e-40d4-a704-13dfddc2bdea] Running
	I0717 17:32:27.730396   32725 system_pods.go:89] "kube-vip-ha-174628" [b2d62768-e68e-4ce3-ad84-31ddac00688e] Running
	I0717 17:32:27.730399   32725 system_pods.go:89] "kube-vip-ha-174628-m02" [a6656a18-6176-4291-a094-e4b942e9ba1c] Running
	I0717 17:32:27.730402   32725 system_pods.go:89] "storage-provisioner" [8c0601bb-36f6-434d-8e9d-1e326bf682f5] Running
	I0717 17:32:27.730408   32725 system_pods.go:126] duration metric: took 206.135707ms to wait for k8s-apps to be running ...
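The k8s-apps check above lists everything in kube-system once (17 pods here) and verifies each pod reports a Running phase. A compact client-go equivalent, under the same illustrative kubeconfig assumption as the earlier sketches, looks like this:

    // system_pods_sketch.go - illustrative: list kube-system pods and report non-Running ones.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("not running: %s (%s)\n", p.Name, p.Status.Phase)
            }
        }
    }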
	I0717 17:32:27.730418   32725 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 17:32:27.730461   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:32:27.745769   32725 system_svc.go:56] duration metric: took 15.343153ms WaitForService to wait for kubelet
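The system_svc step above runs "sudo systemctl is-active --quiet service kubelet" on the node over SSH and treats a zero exit status as "running". Locally the same idea reduces to a one-liner; the sketch below is only a local stand-in and does not reproduce minikube's SSH plumbing.

    // kubelet_active_sketch.go - illustrative local equivalent of the systemctl check.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }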
	I0717 17:32:27.745797   32725 kubeadm.go:582] duration metric: took 21.092108876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:32:27.745825   32725 node_conditions.go:102] verifying NodePressure condition ...
	I0717 17:32:27.921231   32725 request.go:629] Waited for 175.344959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes
	I0717 17:32:27.921292   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes
	I0717 17:32:27.921298   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.921305   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.921311   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.924530   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:27.925278   32725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:32:27.925314   32725 node_conditions.go:123] node cpu capacity is 2
	I0717 17:32:27.925333   32725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:32:27.925338   32725 node_conditions.go:123] node cpu capacity is 2
	I0717 17:32:27.925345   32725 node_conditions.go:105] duration metric: took 179.514948ms to run NodePressure ...
	I0717 17:32:27.925360   32725 start.go:241] waiting for startup goroutines ...
	I0717 17:32:27.925384   32725 start.go:255] writing updated cluster config ...
	I0717 17:32:27.927397   32725 out.go:177] 
	I0717 17:32:27.929043   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:32:27.929126   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:32:27.930795   32725 out.go:177] * Starting "ha-174628-m03" control-plane node in "ha-174628" cluster
	I0717 17:32:27.931898   32725 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:32:27.931916   32725 cache.go:56] Caching tarball of preloaded images
	I0717 17:32:27.932005   32725 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 17:32:27.932018   32725 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 17:32:27.932087   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:32:27.932239   32725 start.go:360] acquireMachinesLock for ha-174628-m03: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 17:32:27.932277   32725 start.go:364] duration metric: took 18.412µs to acquireMachinesLock for "ha-174628-m03"
	I0717 17:32:27.932298   32725 start.go:93] Provisioning new machine with config: &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:32:27.932401   32725 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0717 17:32:27.933866   32725 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 17:32:27.933951   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:32:27.933983   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:32:27.949065   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I0717 17:32:27.949537   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:32:27.950074   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:32:27.950098   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:32:27.950419   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:32:27.950581   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetMachineName
	I0717 17:32:27.950730   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:27.950865   32725 start.go:159] libmachine.API.Create for "ha-174628" (driver="kvm2")
	I0717 17:32:27.950893   32725 client.go:168] LocalClient.Create starting
	I0717 17:32:27.950939   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 17:32:27.950976   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:32:27.950996   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:32:27.951057   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 17:32:27.951083   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:32:27.951099   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:32:27.951131   32725 main.go:141] libmachine: Running pre-create checks...
	I0717 17:32:27.951146   32725 main.go:141] libmachine: (ha-174628-m03) Calling .PreCreateCheck
	I0717 17:32:27.951311   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetConfigRaw
	I0717 17:32:27.951698   32725 main.go:141] libmachine: Creating machine...
	I0717 17:32:27.951713   32725 main.go:141] libmachine: (ha-174628-m03) Calling .Create
	I0717 17:32:27.951881   32725 main.go:141] libmachine: (ha-174628-m03) Creating KVM machine...
	I0717 17:32:27.953177   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found existing default KVM network
	I0717 17:32:27.953293   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found existing private KVM network mk-ha-174628
	I0717 17:32:27.953451   32725 main.go:141] libmachine: (ha-174628-m03) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03 ...
	I0717 17:32:27.953475   32725 main.go:141] libmachine: (ha-174628-m03) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 17:32:27.953531   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:27.953420   33749 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:32:27.953661   32725 main.go:141] libmachine: (ha-174628-m03) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 17:32:28.170503   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:28.170356   33749 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa...
	I0717 17:32:28.227484   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:28.227377   33749 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/ha-174628-m03.rawdisk...
	I0717 17:32:28.227511   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Writing magic tar header
	I0717 17:32:28.227520   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Writing SSH key tar header
	I0717 17:32:28.227528   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:28.227496   33749 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03 ...
	I0717 17:32:28.227683   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03
	I0717 17:32:28.227715   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03 (perms=drwx------)
	I0717 17:32:28.227727   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 17:32:28.227740   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 17:32:28.227756   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 17:32:28.227767   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 17:32:28.227780   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 17:32:28.227792   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 17:32:28.227809   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:32:28.227820   32725 main.go:141] libmachine: (ha-174628-m03) Creating domain...
	I0717 17:32:28.227838   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 17:32:28.227850   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 17:32:28.227865   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins
	I0717 17:32:28.227874   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home
	I0717 17:32:28.227883   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Skipping /home - not owner
	I0717 17:32:28.228822   32725 main.go:141] libmachine: (ha-174628-m03) define libvirt domain using xml: 
	I0717 17:32:28.228840   32725 main.go:141] libmachine: (ha-174628-m03) <domain type='kvm'>
	I0717 17:32:28.228851   32725 main.go:141] libmachine: (ha-174628-m03)   <name>ha-174628-m03</name>
	I0717 17:32:28.228864   32725 main.go:141] libmachine: (ha-174628-m03)   <memory unit='MiB'>2200</memory>
	I0717 17:32:28.228877   32725 main.go:141] libmachine: (ha-174628-m03)   <vcpu>2</vcpu>
	I0717 17:32:28.228890   32725 main.go:141] libmachine: (ha-174628-m03)   <features>
	I0717 17:32:28.228898   32725 main.go:141] libmachine: (ha-174628-m03)     <acpi/>
	I0717 17:32:28.228907   32725 main.go:141] libmachine: (ha-174628-m03)     <apic/>
	I0717 17:32:28.228918   32725 main.go:141] libmachine: (ha-174628-m03)     <pae/>
	I0717 17:32:28.228926   32725 main.go:141] libmachine: (ha-174628-m03)     
	I0717 17:32:28.228937   32725 main.go:141] libmachine: (ha-174628-m03)   </features>
	I0717 17:32:28.228961   32725 main.go:141] libmachine: (ha-174628-m03)   <cpu mode='host-passthrough'>
	I0717 17:32:28.228974   32725 main.go:141] libmachine: (ha-174628-m03)   
	I0717 17:32:28.228985   32725 main.go:141] libmachine: (ha-174628-m03)   </cpu>
	I0717 17:32:28.228996   32725 main.go:141] libmachine: (ha-174628-m03)   <os>
	I0717 17:32:28.229007   32725 main.go:141] libmachine: (ha-174628-m03)     <type>hvm</type>
	I0717 17:32:28.229018   32725 main.go:141] libmachine: (ha-174628-m03)     <boot dev='cdrom'/>
	I0717 17:32:28.229036   32725 main.go:141] libmachine: (ha-174628-m03)     <boot dev='hd'/>
	I0717 17:32:28.229048   32725 main.go:141] libmachine: (ha-174628-m03)     <bootmenu enable='no'/>
	I0717 17:32:28.229066   32725 main.go:141] libmachine: (ha-174628-m03)   </os>
	I0717 17:32:28.229075   32725 main.go:141] libmachine: (ha-174628-m03)   <devices>
	I0717 17:32:28.229082   32725 main.go:141] libmachine: (ha-174628-m03)     <disk type='file' device='cdrom'>
	I0717 17:32:28.229097   32725 main.go:141] libmachine: (ha-174628-m03)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/boot2docker.iso'/>
	I0717 17:32:28.229108   32725 main.go:141] libmachine: (ha-174628-m03)       <target dev='hdc' bus='scsi'/>
	I0717 17:32:28.229119   32725 main.go:141] libmachine: (ha-174628-m03)       <readonly/>
	I0717 17:32:28.229127   32725 main.go:141] libmachine: (ha-174628-m03)     </disk>
	I0717 17:32:28.229139   32725 main.go:141] libmachine: (ha-174628-m03)     <disk type='file' device='disk'>
	I0717 17:32:28.229150   32725 main.go:141] libmachine: (ha-174628-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 17:32:28.229187   32725 main.go:141] libmachine: (ha-174628-m03)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/ha-174628-m03.rawdisk'/>
	I0717 17:32:28.229207   32725 main.go:141] libmachine: (ha-174628-m03)       <target dev='hda' bus='virtio'/>
	I0717 17:32:28.229221   32725 main.go:141] libmachine: (ha-174628-m03)     </disk>
	I0717 17:32:28.229232   32725 main.go:141] libmachine: (ha-174628-m03)     <interface type='network'>
	I0717 17:32:28.229244   32725 main.go:141] libmachine: (ha-174628-m03)       <source network='mk-ha-174628'/>
	I0717 17:32:28.229258   32725 main.go:141] libmachine: (ha-174628-m03)       <model type='virtio'/>
	I0717 17:32:28.229277   32725 main.go:141] libmachine: (ha-174628-m03)     </interface>
	I0717 17:32:28.229290   32725 main.go:141] libmachine: (ha-174628-m03)     <interface type='network'>
	I0717 17:32:28.229302   32725 main.go:141] libmachine: (ha-174628-m03)       <source network='default'/>
	I0717 17:32:28.229310   32725 main.go:141] libmachine: (ha-174628-m03)       <model type='virtio'/>
	I0717 17:32:28.229322   32725 main.go:141] libmachine: (ha-174628-m03)     </interface>
	I0717 17:32:28.229332   32725 main.go:141] libmachine: (ha-174628-m03)     <serial type='pty'>
	I0717 17:32:28.229341   32725 main.go:141] libmachine: (ha-174628-m03)       <target port='0'/>
	I0717 17:32:28.229350   32725 main.go:141] libmachine: (ha-174628-m03)     </serial>
	I0717 17:32:28.229379   32725 main.go:141] libmachine: (ha-174628-m03)     <console type='pty'>
	I0717 17:32:28.229399   32725 main.go:141] libmachine: (ha-174628-m03)       <target type='serial' port='0'/>
	I0717 17:32:28.229414   32725 main.go:141] libmachine: (ha-174628-m03)     </console>
	I0717 17:32:28.229425   32725 main.go:141] libmachine: (ha-174628-m03)     <rng model='virtio'>
	I0717 17:32:28.229439   32725 main.go:141] libmachine: (ha-174628-m03)       <backend model='random'>/dev/random</backend>
	I0717 17:32:28.229454   32725 main.go:141] libmachine: (ha-174628-m03)     </rng>
	I0717 17:32:28.229464   32725 main.go:141] libmachine: (ha-174628-m03)     
	I0717 17:32:28.229475   32725 main.go:141] libmachine: (ha-174628-m03)     
	I0717 17:32:28.229487   32725 main.go:141] libmachine: (ha-174628-m03)   </devices>
	I0717 17:32:28.229496   32725 main.go:141] libmachine: (ha-174628-m03) </domain>
	I0717 17:32:28.229506   32725 main.go:141] libmachine: (ha-174628-m03) 
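The XML between <domain> and </domain> above is what the kvm2 driver hands to libvirt when it creates the ha-174628-m03 VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a cdrom, the raw disk image, and two virtio NICs (the private mk-ha-174628 network plus "default"). A hedged manual equivalent, saving the same XML to a file and driving libvirt through the virsh CLI from Go, is sketched below; the file path is an illustrative assumption.

    // domain_define_sketch.go - illustrative: define and start a libvirt domain from XML via virsh.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v\n%s", name, args, out)
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Assumption: the <domain>...</domain> XML from the log has been written to this file.
        const xmlPath = "/tmp/ha-174628-m03.xml"

        run("virsh", "define", xmlPath)            // register the domain with libvirt
        run("virsh", "start", "ha-174628-m03")     // boot it
        run("virsh", "domifaddr", "ha-174628-m03") // later: query the DHCP lease the driver waits for
    }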
	I0717 17:32:28.236645   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:87:41:a4 in network default
	I0717 17:32:28.237177   32725 main.go:141] libmachine: (ha-174628-m03) Ensuring networks are active...
	I0717 17:32:28.237192   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:28.237810   32725 main.go:141] libmachine: (ha-174628-m03) Ensuring network default is active
	I0717 17:32:28.238073   32725 main.go:141] libmachine: (ha-174628-m03) Ensuring network mk-ha-174628 is active
	I0717 17:32:28.238357   32725 main.go:141] libmachine: (ha-174628-m03) Getting domain xml...
	I0717 17:32:28.239064   32725 main.go:141] libmachine: (ha-174628-m03) Creating domain...
	I0717 17:32:29.458219   32725 main.go:141] libmachine: (ha-174628-m03) Waiting to get IP...
	I0717 17:32:29.459153   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:29.459623   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:29.459644   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:29.459603   33749 retry.go:31] will retry after 192.524869ms: waiting for machine to come up
	I0717 17:32:29.654067   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:29.654552   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:29.654576   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:29.654509   33749 retry.go:31] will retry after 255.817162ms: waiting for machine to come up
	I0717 17:32:29.911892   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:29.912304   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:29.912331   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:29.912265   33749 retry.go:31] will retry after 303.807574ms: waiting for machine to come up
	I0717 17:32:30.217818   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:30.218235   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:30.218256   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:30.218193   33749 retry.go:31] will retry after 370.345102ms: waiting for machine to come up
	I0717 17:32:30.589636   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:30.590142   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:30.590172   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:30.590090   33749 retry.go:31] will retry after 634.938743ms: waiting for machine to come up
	I0717 17:32:31.226831   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:31.227384   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:31.227421   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:31.227366   33749 retry.go:31] will retry after 656.775829ms: waiting for machine to come up
	I0717 17:32:31.886438   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:31.886791   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:31.886821   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:31.886749   33749 retry.go:31] will retry after 817.914558ms: waiting for machine to come up
	I0717 17:32:32.705616   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:32.705977   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:32.706002   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:32.705934   33749 retry.go:31] will retry after 1.159163832s: waiting for machine to come up
	I0717 17:32:33.867104   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:33.867567   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:33.867593   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:33.867530   33749 retry.go:31] will retry after 1.236671526s: waiting for machine to come up
	I0717 17:32:35.105805   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:35.106230   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:35.106253   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:35.106187   33749 retry.go:31] will retry after 2.082191353s: waiting for machine to come up
	I0717 17:32:37.190467   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:37.190882   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:37.190907   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:37.190844   33749 retry.go:31] will retry after 2.239846165s: waiting for machine to come up
	I0717 17:32:39.431818   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:39.432388   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:39.432409   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:39.432355   33749 retry.go:31] will retry after 2.202455513s: waiting for machine to come up
	I0717 17:32:41.636343   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:41.636755   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:41.636778   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:41.636719   33749 retry.go:31] will retry after 4.069466996s: waiting for machine to come up
	I0717 17:32:45.707317   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:45.707823   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:45.707864   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:45.707796   33749 retry.go:31] will retry after 4.852459037s: waiting for machine to come up
	I0717 17:32:50.562133   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.562635   32725 main.go:141] libmachine: (ha-174628-m03) Found IP for machine: 192.168.39.187
	I0717 17:32:50.562667   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has current primary IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
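Note: the repeated "will retry after ...: waiting for machine to come up" lines above are a poll-with-growing-backoff loop over the DHCP leases for the VM's MAC address. A minimal sketch of that pattern, with lookupIP as a hypothetical stand-in for the libvirt lease query (the delays and cap differ from minikube's retry.go):

// Sketch of a poll-until-IP loop with a growing delay between attempts.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the backoff; the real loop also caps it
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	tries := 0
	ip, err := waitForIP(func() (string, error) {
		tries++
		if tries < 4 { // pretend the lease shows up on the fourth poll
			return "", errors.New("no lease yet")
		}
		return "192.168.39.187", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}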
	I0717 17:32:50.562676   32725 main.go:141] libmachine: (ha-174628-m03) Reserving static IP address...
	I0717 17:32:50.563216   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find host DHCP lease matching {name: "ha-174628-m03", mac: "52:54:00:4c:e1:a8", ip: "192.168.39.187"} in network mk-ha-174628
	I0717 17:32:50.638208   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Getting to WaitForSSH function...
	I0717 17:32:50.638240   32725 main.go:141] libmachine: (ha-174628-m03) Reserved static IP address: 192.168.39.187
	I0717 17:32:50.638254   32725 main.go:141] libmachine: (ha-174628-m03) Waiting for SSH to be available...
	I0717 17:32:50.641124   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.641703   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:50.641733   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.641896   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Using SSH client type: external
	I0717 17:32:50.641922   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa (-rw-------)
	I0717 17:32:50.641997   32725 main.go:141] libmachine: (ha-174628-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 17:32:50.642021   32725 main.go:141] libmachine: (ha-174628-m03) DBG | About to run SSH command:
	I0717 17:32:50.642034   32725 main.go:141] libmachine: (ha-174628-m03) DBG | exit 0
	I0717 17:32:50.769047   32725 main.go:141] libmachine: (ha-174628-m03) DBG | SSH cmd err, output: <nil>: 
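Note: the WaitForSSH step above simply runs "exit 0" over the external ssh client with hardened options and treats a zero exit status as "SSH is available". A rough equivalent, reusing the address, key path and flags from the log (illustration only):

// Probe SSH readiness by running "exit 0" on the guest.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa",
		"-p", "22",
		"docker@192.168.39.187",
		"exit 0",
	}
	if err := exec.Command("ssh", args...).Run(); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}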
	I0717 17:32:50.769300   32725 main.go:141] libmachine: (ha-174628-m03) KVM machine creation complete!
	I0717 17:32:50.769649   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetConfigRaw
	I0717 17:32:50.770180   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:50.770431   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:50.770598   32725 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 17:32:50.770611   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:32:50.771822   32725 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 17:32:50.771847   32725 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 17:32:50.771856   32725 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 17:32:50.771866   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:50.774382   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.774707   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:50.774736   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.774863   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:50.775019   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.775181   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.775317   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:50.775468   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:50.775713   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:50.775728   32725 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 17:32:50.880086   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:32:50.880115   32725 main.go:141] libmachine: Detecting the provisioner...
	I0717 17:32:50.880123   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:50.882869   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.883361   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:50.883389   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.883603   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:50.883835   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.883977   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.884100   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:50.884229   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:50.884441   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:50.884457   32725 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 17:32:50.989607   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 17:32:50.989684   32725 main.go:141] libmachine: found compatible host: buildroot
	I0717 17:32:50.989693   32725 main.go:141] libmachine: Provisioning with buildroot...
	I0717 17:32:50.989703   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetMachineName
	I0717 17:32:50.989952   32725 buildroot.go:166] provisioning hostname "ha-174628-m03"
	I0717 17:32:50.989981   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetMachineName
	I0717 17:32:50.990157   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:50.993246   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.993618   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:50.993648   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.993822   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:50.994010   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.994163   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.994383   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:50.994563   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:50.994754   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:50.994771   32725 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174628-m03 && echo "ha-174628-m03" | sudo tee /etc/hostname
	I0717 17:32:51.115100   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174628-m03
	
	I0717 17:32:51.115133   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.117990   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.118349   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.118388   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.118556   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:51.118733   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.118920   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.119058   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:51.119241   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:51.119457   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:51.119473   32725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174628-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174628-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174628-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 17:32:51.234384   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:32:51.234422   32725 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 17:32:51.234441   32725 buildroot.go:174] setting up certificates
	I0717 17:32:51.234451   32725 provision.go:84] configureAuth start
	I0717 17:32:51.234459   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetMachineName
	I0717 17:32:51.234726   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:32:51.237783   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.238222   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.238249   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.238482   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.240655   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.241000   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.241025   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.241218   32725 provision.go:143] copyHostCerts
	I0717 17:32:51.241243   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:32:51.241274   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 17:32:51.241283   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:32:51.241355   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 17:32:51.241432   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:32:51.241449   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 17:32:51.241456   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:32:51.241481   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 17:32:51.241526   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:32:51.241544   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 17:32:51.241550   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:32:51.241581   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 17:32:51.241632   32725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.ha-174628-m03 san=[127.0.0.1 192.168.39.187 ha-174628-m03 localhost minikube]
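Note: the server cert generated above carries both IP and DNS SANs so the node can be reached as 192.168.39.187, ha-174628-m03, localhost or minikube. A compact sketch of how such SANs end up in an x509 certificate; unlike minikube, which signs with the cluster CA, this one is self-signed for brevity:

// Illustration: emit a self-signed server certificate with the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-174628-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.187")},
		DNSNames:     []string{"ha-174628-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}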
	I0717 17:32:51.438209   32725 provision.go:177] copyRemoteCerts
	I0717 17:32:51.438266   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 17:32:51.438288   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.441235   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.441643   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.441670   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.441882   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:51.442123   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.442254   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:51.442473   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:32:51.523065   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 17:32:51.523145   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 17:32:51.546335   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 17:32:51.546401   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 17:32:51.570411   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 17:32:51.570499   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 17:32:51.595232   32725 provision.go:87] duration metric: took 360.767696ms to configureAuth
	I0717 17:32:51.595256   32725 buildroot.go:189] setting minikube options for container-runtime
	I0717 17:32:51.595510   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:32:51.595615   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.598390   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.598850   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.598879   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.599104   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:51.599295   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.599473   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.599621   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:51.599834   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:51.600027   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:51.600049   32725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 17:32:51.860783   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 17:32:51.860809   32725 main.go:141] libmachine: Checking connection to Docker...
	I0717 17:32:51.860822   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetURL
	I0717 17:32:51.862219   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Using libvirt version 6000000
	I0717 17:32:51.864575   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.864983   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.865012   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.865175   32725 main.go:141] libmachine: Docker is up and running!
	I0717 17:32:51.865191   32725 main.go:141] libmachine: Reticulating splines...
	I0717 17:32:51.865200   32725 client.go:171] duration metric: took 23.91429607s to LocalClient.Create
	I0717 17:32:51.865228   32725 start.go:167] duration metric: took 23.914361787s to libmachine.API.Create "ha-174628"
	I0717 17:32:51.865246   32725 start.go:293] postStartSetup for "ha-174628-m03" (driver="kvm2")
	I0717 17:32:51.865267   32725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 17:32:51.865292   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:51.865542   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 17:32:51.865579   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.867591   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.867877   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.867897   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.868048   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:51.868205   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.868334   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:51.868477   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:32:51.951472   32725 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 17:32:51.955703   32725 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 17:32:51.955728   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 17:32:51.955785   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 17:32:51.955863   32725 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 17:32:51.955875   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /etc/ssl/certs/215772.pem
	I0717 17:32:51.955978   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 17:32:51.966128   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:32:51.988282   32725 start.go:296] duration metric: took 123.019698ms for postStartSetup
	I0717 17:32:51.988339   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetConfigRaw
	I0717 17:32:51.988868   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:32:51.991627   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.992133   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.992196   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.992428   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:32:51.992611   32725 start.go:128] duration metric: took 24.060201383s to createHost
	I0717 17:32:51.992643   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.994793   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.995153   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.995189   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.995322   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:51.995518   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.995650   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.995784   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:51.995931   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:51.996120   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:51.996141   32725 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 17:32:52.101672   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237572.079082425
	
	I0717 17:32:52.101699   32725 fix.go:216] guest clock: 1721237572.079082425
	I0717 17:32:52.101709   32725 fix.go:229] Guest: 2024-07-17 17:32:52.079082425 +0000 UTC Remote: 2024-07-17 17:32:51.992633283 +0000 UTC m=+215.699559180 (delta=86.449142ms)
	I0717 17:32:52.101735   32725 fix.go:200] guest clock delta is within tolerance: 86.449142ms
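Note: the guest-clock check above only resynchronizes the VM clock when the guest/host difference exceeds a tolerance; here the ~86ms delta passes. A small sketch of that comparison using the timestamps from the log (the 2s threshold is an assumption for illustration):

// Compare guest and host clocks and decide whether the drift is acceptable.
package main

import (
	"fmt"
	"time"
)

func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	guest := time.Unix(1721237572, 79082425)                              // 2024-07-17 17:32:52.079082425 UTC
	host := time.Date(2024, 7, 17, 17, 32, 51, 992633283, time.UTC)       // remote timestamp from the log
	fmt.Println(withinTolerance(guest, host, 2*time.Second))              // true: delta is ~86.45ms
}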
	I0717 17:32:52.101750   32725 start.go:83] releasing machines lock for "ha-174628-m03", held for 24.169461849s
	I0717 17:32:52.101778   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:52.102081   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:32:52.104685   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.105074   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:52.105103   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.107459   32725 out.go:177] * Found network options:
	I0717 17:32:52.108860   32725 out.go:177]   - NO_PROXY=192.168.39.100,192.168.39.97
	W0717 17:32:52.110135   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 17:32:52.110161   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 17:32:52.110174   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:52.110746   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:52.110932   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:52.111044   32725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 17:32:52.111079   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	W0717 17:32:52.111158   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 17:32:52.111182   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 17:32:52.111257   32725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 17:32:52.111277   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:52.115229   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.115352   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.115724   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:52.115748   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.115773   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:52.115792   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.115904   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:52.116017   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:52.116114   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:52.116207   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:52.116275   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:52.116409   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:32:52.116425   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:52.116592   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:32:52.351926   32725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 17:32:52.357460   32725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 17:32:52.357535   32725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 17:32:52.372594   32725 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 17:32:52.372613   32725 start.go:495] detecting cgroup driver to use...
	I0717 17:32:52.372669   32725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 17:32:52.387789   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 17:32:52.401328   32725 docker.go:217] disabling cri-docker service (if available) ...
	I0717 17:32:52.401390   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 17:32:52.415399   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 17:32:52.428310   32725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 17:32:52.547805   32725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 17:32:52.686824   32725 docker.go:233] disabling docker service ...
	I0717 17:32:52.686894   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 17:32:52.701619   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 17:32:52.714722   32725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 17:32:52.857434   32725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 17:32:52.974350   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 17:32:52.988214   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 17:32:53.006069   32725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 17:32:53.006132   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.017180   32725 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 17:32:53.017255   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.027942   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.037867   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.047784   32725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 17:32:53.057458   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.067514   32725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.082777   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
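Note: the sequence of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.9 and the cgroupfs cgroup manager. A sketch of the same rewrite expressed directly in Go (illustrative only; the real edits also handle conmon_cgroup and default_sysctls, and run on the guest via ssh_runner):

// Rewrite pause_image and cgroup_manager in the CRI-O drop-in config.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}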
	I0717 17:32:53.092567   32725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 17:32:53.101279   32725 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 17:32:53.101334   32725 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 17:32:53.112489   32725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 17:32:53.120888   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:32:53.232964   32725 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 17:32:53.371442   32725 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 17:32:53.371507   32725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 17:32:53.375863   32725 start.go:563] Will wait 60s for crictl version
	I0717 17:32:53.375922   32725 ssh_runner.go:195] Run: which crictl
	I0717 17:32:53.379325   32725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 17:32:53.417781   32725 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 17:32:53.417871   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:32:53.448362   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:32:53.479015   32725 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 17:32:53.480665   32725 out.go:177]   - env NO_PROXY=192.168.39.100
	I0717 17:32:53.482211   32725 out.go:177]   - env NO_PROXY=192.168.39.100,192.168.39.97
	I0717 17:32:53.483754   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:32:53.487140   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:53.487538   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:53.487569   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:53.487773   32725 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 17:32:53.492188   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:32:53.504283   32725 mustload.go:65] Loading cluster: ha-174628
	I0717 17:32:53.504522   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:32:53.504863   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:32:53.504912   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:32:53.520585   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I0717 17:32:53.520979   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:32:53.521520   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:32:53.521555   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:32:53.521870   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:32:53.522067   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:32:53.523866   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:32:53.524289   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:32:53.524343   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:32:53.539209   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39055
	I0717 17:32:53.539638   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:32:53.540040   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:32:53.540057   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:32:53.540399   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:32:53.540604   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:32:53.540776   32725 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628 for IP: 192.168.39.187
	I0717 17:32:53.540909   32725 certs.go:194] generating shared ca certs ...
	I0717 17:32:53.540967   32725 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:32:53.541136   32725 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 17:32:53.541189   32725 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 17:32:53.541203   32725 certs.go:256] generating profile certs ...
	I0717 17:32:53.541342   32725 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key
	I0717 17:32:53.541373   32725 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.256de965
	I0717 17:32:53.541395   32725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.256de965 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.97 192.168.39.187 192.168.39.254]
	I0717 17:32:53.654771   32725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.256de965 ...
	I0717 17:32:53.654802   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.256de965: {Name:mka9a94d0ef93b6feff80505c13cb6cb0977edc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:32:53.654988   32725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.256de965 ...
	I0717 17:32:53.655002   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.256de965: {Name:mk097b9771d7a02dd6c417fdd0556e1661a3afd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:32:53.655078   32725 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.256de965 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt
	I0717 17:32:53.655247   32725 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.256de965 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key
	I0717 17:32:53.655385   32725 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key
	I0717 17:32:53.655401   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 17:32:53.655417   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 17:32:53.655432   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 17:32:53.655446   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 17:32:53.655461   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 17:32:53.655474   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 17:32:53.655485   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 17:32:53.655500   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 17:32:53.655559   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 17:32:53.655591   32725 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 17:32:53.655602   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 17:32:53.655627   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 17:32:53.655653   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 17:32:53.655676   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 17:32:53.655719   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:32:53.655750   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /usr/share/ca-certificates/215772.pem
	I0717 17:32:53.655765   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:32:53.655780   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem -> /usr/share/ca-certificates/21577.pem
	I0717 17:32:53.655812   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:32:53.658925   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:32:53.659339   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:32:53.659367   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:32:53.659536   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:32:53.659736   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:32:53.659892   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:32:53.660020   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:32:53.729334   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 17:32:53.734050   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 17:32:53.751730   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 17:32:53.756178   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 17:32:53.766181   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 17:32:53.770225   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 17:32:53.780232   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 17:32:53.784036   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 17:32:53.793503   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 17:32:53.797105   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 17:32:53.807668   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 17:32:53.811652   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 17:32:53.822961   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 17:32:53.848987   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 17:32:53.872822   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 17:32:53.895475   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 17:32:53.919273   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0717 17:32:53.943792   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 17:32:53.968284   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 17:32:53.992314   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 17:32:54.016776   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 17:32:54.041042   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 17:32:54.065355   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 17:32:54.087492   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 17:32:54.103125   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 17:32:54.118399   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 17:32:54.132971   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 17:32:54.149744   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 17:32:54.164764   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 17:32:54.180784   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 17:32:54.196104   32725 ssh_runner.go:195] Run: openssl version
	I0717 17:32:54.201662   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 17:32:54.211240   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:32:54.215270   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:32:54.215319   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:32:54.220442   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 17:32:54.229827   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 17:32:54.239667   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 17:32:54.243710   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 17:32:54.243773   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 17:32:54.249216   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 17:32:54.259473   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 17:32:54.269225   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 17:32:54.273248   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 17:32:54.273299   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 17:32:54.278432   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
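The lines above show the CA trust wiring step: each bundle copied to /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and then exposed under /etc/ssl/certs as a `<hash>.0` symlink so OpenSSL-style lookups can find it. Below is a minimal Go sketch of that same pattern; it shells out locally via os/exec instead of minikube's ssh_runner, and the paths are copied from the log purely for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert mirrors the hash-and-link step in the log: hash the PEM,
// then symlink it as <hash>.0 so OpenSSL can locate it by subject hash.
// Needs write access to /etc/ssl/certs (the log runs this via sudo over SSH).
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if err := exec.Command("ln", "-fs", pemPath, link).Run(); err != nil {
		return fmt.Errorf("linking %s: %w", link, err)
	}
	return nil
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/21577.pem",
		"/usr/share/ca-certificates/215772.pem",
	} {
		if err := installCACert(pem); err != nil {
			fmt.Println("warning:", err)
		}
	}
}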
	I0717 17:32:54.287892   32725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 17:32:54.291981   32725 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 17:32:54.292036   32725 kubeadm.go:934] updating node {m03 192.168.39.187 8443 v1.30.2 crio true true} ...
	I0717 17:32:54.292146   32725 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174628-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 17:32:54.292178   32725 kube-vip.go:115] generating kube-vip config ...
	I0717 17:32:54.292207   32725 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 17:32:54.306918   32725 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 17:32:54.306996   32725 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
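The manifest above is the static pod that gets written to /etc/kubernetes/manifests so kube-vip can hold the HA virtual IP (192.168.39.254 on eth0) via leader election and load-balance the API servers on port 8443. The sketch below shows one way such a manifest could be templated; the template text and struct fields are illustrative assumptions, not minikube's kube-vip.go.

package main

import (
	"os"
	"text/template"
)

// vipConfig holds the only values that vary per cluster in this sketch.
type vipConfig struct {
	Interface string
	Port      int
	VIP       string
}

const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - {name: vip_interface, value: {{.Interface}}}
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: {{.VIP}}}
    - {name: cp_enable, value: "true"}
    - {name: vip_leaderelection, value: "true"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the log: VIP 192.168.39.254 on eth0, API server port 8443.
	if err := t.Execute(os.Stdout, vipConfig{Interface: "eth0", Port: 8443, VIP: "192.168.39.254"}); err != nil {
		panic(err)
	}
}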
	I0717 17:32:54.307065   32725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 17:32:54.317023   32725 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 17:32:54.317082   32725 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 17:32:54.326591   32725 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0717 17:32:54.326637   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:32:54.326593   32725 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 17:32:54.326588   32725 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0717 17:32:54.326705   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 17:32:54.326720   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 17:32:54.326777   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 17:32:54.326782   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 17:32:54.343547   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 17:32:54.343569   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 17:32:54.343589   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 17:32:54.343649   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 17:32:54.343708   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 17:32:54.343739   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 17:32:54.382827   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 17:32:54.382860   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
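Each kubeadm/kubectl/kubelet URL above carries a `?checksum=file:<url>.sha256` suffix, i.e. the download is verified against the published SHA-256 file before the binary is copied into /var/lib/minikube/binaries. A small stand-alone sketch of that verification follows, assuming a plain HTTP fetch rather than minikube's cached downloader.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// download streams url to dest while computing its SHA-256 digest.
func download(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet"
	got, err := download(base, "/tmp/kubelet")
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(base + ".sha256") // published checksum file: hex digest only
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if got != strings.TrimSpace(string(want)) {
		panic(fmt.Sprintf("kubelet checksum mismatch: got %s want %s", got, strings.TrimSpace(string(want))))
	}
	fmt.Println("kubelet checksum verified")
}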
	I0717 17:32:55.129761   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 17:32:55.139472   32725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 17:32:55.156008   32725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 17:32:55.174308   32725 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 17:32:55.191174   32725 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 17:32:55.194996   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:32:55.207583   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:32:55.328277   32725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:32:55.352521   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:32:55.353063   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:32:55.353118   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:32:55.368193   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
	I0717 17:32:55.368665   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:32:55.369265   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:32:55.369292   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:32:55.369624   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:32:55.369864   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:32:55.370050   32725 start.go:317] joinCluster: &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:32:55.370201   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 17:32:55.370220   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:32:55.373242   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:32:55.373707   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:32:55.373732   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:32:55.373923   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:32:55.374110   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:32:55.374299   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:32:55.374447   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:32:55.536354   32725 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:32:55.536405   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vk8qsb.12drdxxthtwm1ogt --discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174628-m03 --control-plane --apiserver-advertise-address=192.168.39.187 --apiserver-bind-port=8443"
	I0717 17:33:19.274771   32725 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vk8qsb.12drdxxthtwm1ogt --discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174628-m03 --control-plane --apiserver-advertise-address=192.168.39.187 --apiserver-bind-port=8443": (23.738336094s)
	I0717 17:33:19.274813   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 17:33:19.815424   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174628-m03 minikube.k8s.io/updated_at=2024_07_17T17_33_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=ha-174628 minikube.k8s.io/primary=false
	I0717 17:33:19.917568   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174628-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 17:33:20.036541   32725 start.go:319] duration metric: took 24.666487067s to joinCluster
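The join command above is assembled from the output of `kubeadm token create --print-join-command` plus the extra flags needed for an additional control-plane member (CRI socket, node name, advertise address, bind port). Below is a hedged sketch of that assembly; the helper name is hypothetical and the token and CA hash are replaced with placeholders.

package main

import (
	"fmt"
	"strings"
)

// controlPlaneJoin appends the extra flags seen in the log to the command that
// `kubeadm token create --print-join-command` returned over SSH.
func controlPlaneJoin(printJoinCmd, nodeName, advertiseIP string, bindPort int) string {
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", bindPort),
	}
	return strings.TrimSpace(printJoinCmd) + " " + strings.Join(extra, " ")
}

func main() {
	base := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>"
	fmt.Println(controlPlaneJoin(base, "ha-174628-m03", "192.168.39.187", 8443))
}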
	I0717 17:33:20.036649   32725 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:33:20.036991   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:33:20.037950   32725 out.go:177] * Verifying Kubernetes components...
	I0717 17:33:20.038778   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:33:20.268248   32725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:33:20.287736   32725 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:33:20.287994   32725 kapi.go:59] client config for ha-174628: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt", KeyFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key", CAFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 17:33:20.288072   32725 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.100:8443
	I0717 17:33:20.288315   32725 node_ready.go:35] waiting up to 6m0s for node "ha-174628-m03" to be "Ready" ...
	I0717 17:33:20.288406   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:20.288415   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:20.288422   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:20.288426   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:20.292742   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:33:20.789485   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:20.789515   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:20.789529   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:20.789535   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:20.793251   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:21.289180   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:21.289200   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:21.289208   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:21.289213   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:21.292997   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:21.789506   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:21.789526   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:21.789537   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:21.789542   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:21.793304   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:22.288588   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:22.288614   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:22.288622   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:22.288626   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:22.291561   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:22.292125   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:22.788488   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:22.788510   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:22.788521   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:22.788530   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:22.791355   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:23.288918   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:23.288960   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:23.288973   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:23.288977   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:23.298962   32725 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 17:33:23.788564   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:23.788587   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:23.788596   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:23.788603   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:23.792071   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:24.288548   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:24.288576   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:24.288587   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:24.288592   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:24.292616   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:33:24.293180   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:24.788719   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:24.788739   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:24.788747   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:24.788750   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:24.791624   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:25.288577   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:25.288597   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:25.288605   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:25.288616   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:25.291759   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:25.789482   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:25.789510   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:25.789521   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:25.789525   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:25.793388   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:26.289346   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:26.289368   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:26.289376   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:26.289380   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:26.292671   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:26.293560   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:26.788771   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:26.788795   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:26.788806   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:26.788813   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:26.792188   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:27.288499   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:27.288520   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:27.288528   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:27.288532   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:27.291613   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:27.789524   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:27.789548   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:27.789561   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:27.789570   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:27.793307   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:28.288669   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:28.288691   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:28.288699   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:28.288703   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:28.292413   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:28.789196   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:28.789221   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:28.789232   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:28.789240   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:28.792624   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:28.793214   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:29.288597   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:29.288620   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:29.288631   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:29.288641   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:29.291909   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:29.789374   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:29.789395   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:29.789402   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:29.789405   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:29.793097   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:30.289204   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:30.289228   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:30.289240   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:30.289246   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:30.292771   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:30.789490   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:30.789511   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:30.789518   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:30.789523   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:30.793133   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:30.793741   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:31.289202   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:31.289226   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:31.289232   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:31.289236   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:31.292290   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:31.789043   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:31.789066   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:31.789073   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:31.789076   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:31.792378   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:32.288973   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:32.288993   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:32.289001   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:32.289005   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:32.292148   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:32.788928   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:32.788971   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:32.788982   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:32.788987   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:32.792255   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:33.288589   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:33.288609   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:33.288619   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:33.288624   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:33.291393   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:33.292006   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:33.789035   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:33.789057   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:33.789064   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:33.789068   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:33.792158   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:34.289171   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:34.289195   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:34.289204   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:34.289211   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:34.293745   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:33:34.788898   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:34.788922   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:34.788934   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:34.788940   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:34.792432   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:35.289082   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:35.289102   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:35.289110   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:35.289113   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:35.292225   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:35.292913   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:35.788743   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:35.788769   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:35.788781   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:35.788810   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:35.791885   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:36.288995   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:36.289023   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:36.289032   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:36.289037   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:36.292342   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:36.789265   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:36.789290   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:36.789298   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:36.789318   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:36.792603   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:37.288510   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:37.288538   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.288550   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.288556   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.292041   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:37.788631   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:37.788653   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.788663   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.788669   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.791737   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:37.792484   32725 node_ready.go:49] node "ha-174628-m03" has status "Ready":"True"
	I0717 17:33:37.792504   32725 node_ready.go:38] duration metric: took 17.504173412s for node "ha-174628-m03" to be "Ready" ...
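The roughly 500 ms GET loop above (node_ready.go) simply polls the Node object until its Ready condition turns True. The following is a hedged client-go sketch of the same idea, not minikube's implementation; the kubeconfig path is the one the log loads, and the timeout matches the 6m0s wait.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the Node's Ready condition is True.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitNodeReady polls the API server every 500ms, like the GETs in the log.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && nodeIsReady(n) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19283-14386/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "ha-174628-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}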
	I0717 17:33:37.792513   32725 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 17:33:37.792585   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:33:37.792598   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.792623   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.792632   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.800718   32725 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 17:33:37.808766   32725 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.808834   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ljjl7
	I0717 17:33:37.808842   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.808849   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.808855   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.811798   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.812761   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:37.812775   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.812783   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.812788   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.815155   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.815818   32725 pod_ready.go:92] pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:37.815836   32725 pod_ready.go:81] duration metric: took 7.048784ms for pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.815848   32725 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.815908   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nb567
	I0717 17:33:37.815919   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.815929   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.815938   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.818715   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.819439   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:37.819456   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.819466   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.819472   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.821430   32725 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 17:33:37.821860   32725 pod_ready.go:92] pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:37.821873   32725 pod_ready.go:81] duration metric: took 6.018832ms for pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.821884   32725 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.821934   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628
	I0717 17:33:37.821941   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.821948   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.821955   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.823945   32725 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 17:33:37.824397   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:37.824411   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.824420   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.824428   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.826558   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.827099   32725 pod_ready.go:92] pod "etcd-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:37.827115   32725 pod_ready.go:81] duration metric: took 5.22081ms for pod "etcd-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.827125   32725 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.827176   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628-m02
	I0717 17:33:37.827187   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.827197   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.827204   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.829485   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.830061   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:37.830076   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.830087   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.830092   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.832117   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.832485   32725 pod_ready.go:92] pod "etcd-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:37.832501   32725 pod_ready.go:81] duration metric: took 5.37018ms for pod "etcd-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.832509   32725 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.988954   32725 request.go:629] Waited for 156.367687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628-m03
	I0717 17:33:37.989014   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628-m03
	I0717 17:33:37.989021   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.989035   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.989042   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.992402   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:38.189553   32725 request.go:629] Waited for 196.312101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:38.189608   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:38.189613   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:38.189620   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:38.189623   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:38.192602   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:38.193114   32725 pod_ready.go:92] pod "etcd-ha-174628-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:38.193131   32725 pod_ready.go:81] duration metric: took 360.615576ms for pod "etcd-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
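The `request.go:629` lines above, with their 150-200 ms waits, come from client-go's client-side token-bucket rate limiter (QPS 5, burst 10 unless the rest.Config overrides them), not from server-side API priority and fairness. A short sketch of how a client could raise those limits follows; whether minikube tunes them elsewhere is not visible in this log.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19283-14386/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; raising them avoids the client-side
	// throttling waits logged by request.go during the pod_ready checks.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}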
	I0717 17:33:38.193146   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:38.389397   32725 request.go:629] Waited for 196.170987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628
	I0717 17:33:38.389468   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628
	I0717 17:33:38.389476   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:38.389483   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:38.389491   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:38.392753   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:38.589173   32725 request.go:629] Waited for 195.758114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:38.589234   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:38.589255   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:38.589269   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:38.589276   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:38.592906   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:38.593635   32725 pod_ready.go:92] pod "kube-apiserver-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:38.593653   32725 pod_ready.go:81] duration metric: took 400.501461ms for pod "kube-apiserver-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:38.593666   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:38.788628   32725 request.go:629] Waited for 194.886624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m02
	I0717 17:33:38.788711   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m02
	I0717 17:33:38.788723   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:38.788737   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:38.788746   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:38.791882   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:38.988998   32725 request.go:629] Waited for 196.377182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:38.989057   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:38.989082   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:38.989090   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:38.989097   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:38.992029   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:38.992576   32725 pod_ready.go:92] pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:38.992597   32725 pod_ready.go:81] duration metric: took 398.922342ms for pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:38.992606   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:39.189599   32725 request.go:629] Waited for 196.906829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m03
	I0717 17:33:39.189654   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m03
	I0717 17:33:39.189659   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:39.189666   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:39.189670   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:39.192838   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:39.389338   32725 request.go:629] Waited for 195.774064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:39.389420   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:39.389428   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:39.389438   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:39.389447   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:39.392631   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:39.393101   32725 pod_ready.go:92] pod "kube-apiserver-ha-174628-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:39.393119   32725 pod_ready.go:81] duration metric: took 400.507241ms for pod "kube-apiserver-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:39.393128   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:39.589555   32725 request.go:629] Waited for 196.347589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628
	I0717 17:33:39.589608   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628
	I0717 17:33:39.589613   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:39.589620   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:39.589625   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:39.593686   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:33:39.788964   32725 request.go:629] Waited for 194.35981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:39.789030   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:39.789038   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:39.789046   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:39.789054   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:39.791748   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:39.792265   32725 pod_ready.go:92] pod "kube-controller-manager-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:39.792284   32725 pod_ready.go:81] duration metric: took 399.148814ms for pod "kube-controller-manager-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:39.792296   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:39.989380   32725 request.go:629] Waited for 197.009462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m02
	I0717 17:33:39.989451   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m02
	I0717 17:33:39.989456   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:39.989463   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:39.989467   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:39.992676   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:40.188963   32725 request.go:629] Waited for 195.353988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:40.189020   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:40.189026   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:40.189033   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:40.189037   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:40.191916   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:40.192530   32725 pod_ready.go:92] pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:40.192547   32725 pod_ready.go:81] duration metric: took 400.243601ms for pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:40.192560   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:40.389113   32725 request.go:629] Waited for 196.485647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m03
	I0717 17:33:40.389196   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m03
	I0717 17:33:40.389208   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:40.389220   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:40.389232   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:40.392007   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:40.589259   32725 request.go:629] Waited for 196.36758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:40.589350   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:40.589363   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:40.589375   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:40.589383   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:40.593013   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:40.593724   32725 pod_ready.go:92] pod "kube-controller-manager-ha-174628-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:40.593743   32725 pod_ready.go:81] duration metric: took 401.175109ms for pod "kube-controller-manager-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:40.593753   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7lchn" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:40.788707   32725 request.go:629] Waited for 194.878686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lchn
	I0717 17:33:40.788775   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lchn
	I0717 17:33:40.788783   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:40.788794   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:40.788803   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:40.792019   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:40.989250   32725 request.go:629] Waited for 196.366888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:40.989312   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:40.989320   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:40.989330   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:40.989338   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:40.992429   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:40.992975   32725 pod_ready.go:92] pod "kube-proxy-7lchn" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:40.992999   32725 pod_ready.go:81] duration metric: took 399.240857ms for pod "kube-proxy-7lchn" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:40.993009   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fqf9q" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:41.189116   32725 request.go:629] Waited for 196.047614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fqf9q
	I0717 17:33:41.189234   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fqf9q
	I0717 17:33:41.189247   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:41.189259   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:41.189269   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:41.193100   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:41.388968   32725 request.go:629] Waited for 195.125881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:41.389033   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:41.389038   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:41.389046   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:41.389050   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:41.392715   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:41.393180   32725 pod_ready.go:92] pod "kube-proxy-fqf9q" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:41.393203   32725 pod_ready.go:81] duration metric: took 400.188353ms for pod "kube-proxy-fqf9q" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:41.393213   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjkww" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:41.588629   32725 request.go:629] Waited for 195.34713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjkww
	I0717 17:33:41.588719   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjkww
	I0717 17:33:41.588729   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:41.588737   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:41.588743   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:41.591926   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:41.788928   32725 request.go:629] Waited for 196.346277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:41.788982   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:41.788987   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:41.788994   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:41.788997   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:41.792516   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:41.793183   32725 pod_ready.go:92] pod "kube-proxy-tjkww" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:41.793202   32725 pod_ready.go:81] duration metric: took 399.97971ms for pod "kube-proxy-tjkww" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:41.793213   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:41.989311   32725 request.go:629] Waited for 196.032696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628
	I0717 17:33:41.989373   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628
	I0717 17:33:41.989379   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:41.989387   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:41.989396   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:41.992762   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:42.189600   32725 request.go:629] Waited for 196.361839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:42.189658   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:42.189663   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:42.189671   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:42.189677   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:42.192370   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:42.192995   32725 pod_ready.go:92] pod "kube-scheduler-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:42.193014   32725 pod_ready.go:81] duration metric: took 399.792549ms for pod "kube-scheduler-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:42.193026   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:42.389023   32725 request.go:629] Waited for 195.940626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m02
	I0717 17:33:42.389099   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m02
	I0717 17:33:42.389106   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:42.389117   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:42.389129   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:42.392155   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:42.589175   32725 request.go:629] Waited for 196.356601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:42.589243   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:42.589251   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:42.589263   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:42.589275   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:42.592976   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:42.593570   32725 pod_ready.go:92] pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:42.593588   32725 pod_ready.go:81] duration metric: took 400.555408ms for pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:42.593598   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:42.789676   32725 request.go:629] Waited for 195.992794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m03
	I0717 17:33:42.789728   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m03
	I0717 17:33:42.789733   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:42.789740   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:42.789746   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:42.793492   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:42.989430   32725 request.go:629] Waited for 195.274096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:42.989494   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:42.989502   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:42.989515   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:42.989526   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:42.992848   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:42.993560   32725 pod_ready.go:92] pod "kube-scheduler-ha-174628-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:42.993580   32725 pod_ready.go:81] duration metric: took 399.973603ms for pod "kube-scheduler-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:42.993593   32725 pod_ready.go:38] duration metric: took 5.201069604s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
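	The pod_ready.go entries above repeatedly GET each control-plane pod and its node, and report once the pod's "Ready" condition turns "True". Purely as an illustration (not part of the test output), a minimal client-go sketch of that condition check could look like the following; the kubeconfig path and the pod name are assumptions chosen for the example, not values taken from the harness.

	// podready.go - illustrative sketch of a pod "Ready" condition check.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady returns true when the pod carries a Ready condition with status True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: the kubeconfig minikube wrote to the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Assumption: one of the pod names seen in the log above.
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-ha-174628", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("Ready:", isPodReady(pod))
	}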
	I0717 17:33:42.993616   32725 api_server.go:52] waiting for apiserver process to appear ...
	I0717 17:33:42.993679   32725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:33:43.011575   32725 api_server.go:72] duration metric: took 22.974885333s to wait for apiserver process to appear ...
	I0717 17:33:43.011599   32725 api_server.go:88] waiting for apiserver healthz status ...
	I0717 17:33:43.011616   32725 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0717 17:33:43.018029   32725 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0717 17:33:43.018106   32725 round_trippers.go:463] GET https://192.168.39.100:8443/version
	I0717 17:33:43.018116   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:43.018129   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:43.018138   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:43.019123   32725 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 17:33:43.019262   32725 api_server.go:141] control plane version: v1.30.2
	I0717 17:33:43.019282   32725 api_server.go:131] duration metric: took 7.675984ms to wait for apiserver health ...
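	The api_server.go entries above first confirm the kube-apiserver process with pgrep and then probe https://192.168.39.100:8443/healthz until it returns 200 with body "ok". A minimal Go sketch of such a probe follows; skipping TLS verification is an assumption made only so the sketch runs without the cluster CA bundle, not what the harness does.

	// healthz.go - illustrative sketch of an apiserver /healthz probe.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip verification of the test cluster's self-signed certificate.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.100:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok", as in the log above.
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
	}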
	I0717 17:33:43.019293   32725 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 17:33:43.188619   32725 request.go:629] Waited for 169.253857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:33:43.188678   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:33:43.188683   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:43.188701   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:43.188705   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:43.196310   32725 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 17:33:43.202224   32725 system_pods.go:59] 24 kube-system pods found
	I0717 17:33:43.202249   32725 system_pods.go:61] "coredns-7db6d8ff4d-ljjl7" [2c4857a1-6ccd-4122-80b5-f5bcfd2e307f] Running
	I0717 17:33:43.202254   32725 system_pods.go:61] "coredns-7db6d8ff4d-nb567" [1739ac64-be05-4438-9a8f-a0d2821a1650] Running
	I0717 17:33:43.202257   32725 system_pods.go:61] "etcd-ha-174628" [005dbd48-14a2-458a-a8b3-252696a4ce85] Running
	I0717 17:33:43.202261   32725 system_pods.go:61] "etcd-ha-174628-m02" [6598f8f5-41df-46a9-bb82-fcf2ad182e60] Running
	I0717 17:33:43.202265   32725 system_pods.go:61] "etcd-ha-174628-m03" [6b96cf8d-24de-45b7-90d1-ebde3d5a9f7c] Running
	I0717 17:33:43.202269   32725 system_pods.go:61] "kindnet-79txz" [8c09c315-591a-4835-a433-f3bc3283f305] Running
	I0717 17:33:43.202272   32725 system_pods.go:61] "kindnet-k6jnp" [9bca93ed-aca5-4540-990c-d9e6209d12d0] Running
	I0717 17:33:43.202274   32725 system_pods.go:61] "kindnet-p7tg6" [56af22ef-0bcb-42a1-8976-117b288ef240] Running
	I0717 17:33:43.202278   32725 system_pods.go:61] "kube-apiserver-ha-174628" [3f169484-b9b1-4be6-abec-2309c0bfecba] Running
	I0717 17:33:43.202281   32725 system_pods.go:61] "kube-apiserver-ha-174628-m02" [316d349c-f099-45c3-a9ab-34fbcaeaae02] Running
	I0717 17:33:43.202284   32725 system_pods.go:61] "kube-apiserver-ha-174628-m03" [1ac2a7b1-cfbd-4e77-8711-4c82792e0cd9] Running
	I0717 17:33:43.202288   32725 system_pods.go:61] "kube-controller-manager-ha-174628" [ea259b8d-9fcb-4fb1-9e32-75d6a47e44ed] Running
	I0717 17:33:43.202293   32725 system_pods.go:61] "kube-controller-manager-ha-174628-m02" [0374a405-7fb7-4367-997e-0ac06d57338d] Running
	I0717 17:33:43.202296   32725 system_pods.go:61] "kube-controller-manager-ha-174628-m03" [c5276fed-d860-4710-992e-a1b5ec2a69c0] Running
	I0717 17:33:43.202302   32725 system_pods.go:61] "kube-proxy-7lchn" [a01b695f-ec8b-4727-9c82-4251aa34d682] Running
	I0717 17:33:43.202305   32725 system_pods.go:61] "kube-proxy-fqf9q" [f74d57a9-38a2-464d-991f-fc8905fdbe3f] Running
	I0717 17:33:43.202311   32725 system_pods.go:61] "kube-proxy-tjkww" [d50b5e14-72c3-4338-9429-40764e58ca45] Running
	I0717 17:33:43.202314   32725 system_pods.go:61] "kube-scheduler-ha-174628" [1776b347-cc13-44da-a60a-199bdb85d2c2] Running
	I0717 17:33:43.202317   32725 system_pods.go:61] "kube-scheduler-ha-174628-m02" [ce3683eb-351e-40d4-a704-13dfddc2bdea] Running
	I0717 17:33:43.202322   32725 system_pods.go:61] "kube-scheduler-ha-174628-m03" [d0a0a6ad-1daf-4991-a330-2facbd6d0f7f] Running
	I0717 17:33:43.202325   32725 system_pods.go:61] "kube-vip-ha-174628" [b2d62768-e68e-4ce3-ad84-31ddac00688e] Running
	I0717 17:33:43.202327   32725 system_pods.go:61] "kube-vip-ha-174628-m02" [a6656a18-6176-4291-a094-e4b942e9ba1c] Running
	I0717 17:33:43.202330   32725 system_pods.go:61] "kube-vip-ha-174628-m03" [e77aed0c-76e0-4a43-bcc2-f4c96b7d3b37] Running
	I0717 17:33:43.202334   32725 system_pods.go:61] "storage-provisioner" [8c0601bb-36f6-434d-8e9d-1e326bf682f5] Running
	I0717 17:33:43.202345   32725 system_pods.go:74] duration metric: took 183.046597ms to wait for pod list to return data ...
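	The system_pods.go entries above issue a single LIST against /api/v1/namespaces/kube-system/pods and count the pods returned. A hedged client-go sketch of the same listing; the kubeconfig path is again an assumption.

	// listpods.go - illustrative sketch of listing kube-system pods.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Equivalent of the GET .../namespaces/kube-system/pods request in the log above.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		}
	}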
	I0717 17:33:43.202356   32725 default_sa.go:34] waiting for default service account to be created ...
	I0717 17:33:43.388703   32725 request.go:629] Waited for 186.278998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0717 17:33:43.388765   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0717 17:33:43.388771   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:43.388777   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:43.388784   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:43.391520   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:43.391645   32725 default_sa.go:45] found service account: "default"
	I0717 17:33:43.391660   32725 default_sa.go:55] duration metric: took 189.299052ms for default service account to be created ...
	I0717 17:33:43.391668   32725 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 17:33:43.588885   32725 request.go:629] Waited for 197.151009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:33:43.588961   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:33:43.588981   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:43.588992   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:43.589000   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:43.595325   32725 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 17:33:43.601693   32725 system_pods.go:86] 24 kube-system pods found
	I0717 17:33:43.601716   32725 system_pods.go:89] "coredns-7db6d8ff4d-ljjl7" [2c4857a1-6ccd-4122-80b5-f5bcfd2e307f] Running
	I0717 17:33:43.601722   32725 system_pods.go:89] "coredns-7db6d8ff4d-nb567" [1739ac64-be05-4438-9a8f-a0d2821a1650] Running
	I0717 17:33:43.601727   32725 system_pods.go:89] "etcd-ha-174628" [005dbd48-14a2-458a-a8b3-252696a4ce85] Running
	I0717 17:33:43.601732   32725 system_pods.go:89] "etcd-ha-174628-m02" [6598f8f5-41df-46a9-bb82-fcf2ad182e60] Running
	I0717 17:33:43.601736   32725 system_pods.go:89] "etcd-ha-174628-m03" [6b96cf8d-24de-45b7-90d1-ebde3d5a9f7c] Running
	I0717 17:33:43.601741   32725 system_pods.go:89] "kindnet-79txz" [8c09c315-591a-4835-a433-f3bc3283f305] Running
	I0717 17:33:43.601745   32725 system_pods.go:89] "kindnet-k6jnp" [9bca93ed-aca5-4540-990c-d9e6209d12d0] Running
	I0717 17:33:43.601750   32725 system_pods.go:89] "kindnet-p7tg6" [56af22ef-0bcb-42a1-8976-117b288ef240] Running
	I0717 17:33:43.601759   32725 system_pods.go:89] "kube-apiserver-ha-174628" [3f169484-b9b1-4be6-abec-2309c0bfecba] Running
	I0717 17:33:43.601765   32725 system_pods.go:89] "kube-apiserver-ha-174628-m02" [316d349c-f099-45c3-a9ab-34fbcaeaae02] Running
	I0717 17:33:43.601775   32725 system_pods.go:89] "kube-apiserver-ha-174628-m03" [1ac2a7b1-cfbd-4e77-8711-4c82792e0cd9] Running
	I0717 17:33:43.601782   32725 system_pods.go:89] "kube-controller-manager-ha-174628" [ea259b8d-9fcb-4fb1-9e32-75d6a47e44ed] Running
	I0717 17:33:43.601789   32725 system_pods.go:89] "kube-controller-manager-ha-174628-m02" [0374a405-7fb7-4367-997e-0ac06d57338d] Running
	I0717 17:33:43.601797   32725 system_pods.go:89] "kube-controller-manager-ha-174628-m03" [c5276fed-d860-4710-992e-a1b5ec2a69c0] Running
	I0717 17:33:43.601803   32725 system_pods.go:89] "kube-proxy-7lchn" [a01b695f-ec8b-4727-9c82-4251aa34d682] Running
	I0717 17:33:43.601811   32725 system_pods.go:89] "kube-proxy-fqf9q" [f74d57a9-38a2-464d-991f-fc8905fdbe3f] Running
	I0717 17:33:43.601817   32725 system_pods.go:89] "kube-proxy-tjkww" [d50b5e14-72c3-4338-9429-40764e58ca45] Running
	I0717 17:33:43.601825   32725 system_pods.go:89] "kube-scheduler-ha-174628" [1776b347-cc13-44da-a60a-199bdb85d2c2] Running
	I0717 17:33:43.601831   32725 system_pods.go:89] "kube-scheduler-ha-174628-m02" [ce3683eb-351e-40d4-a704-13dfddc2bdea] Running
	I0717 17:33:43.601840   32725 system_pods.go:89] "kube-scheduler-ha-174628-m03" [d0a0a6ad-1daf-4991-a330-2facbd6d0f7f] Running
	I0717 17:33:43.601846   32725 system_pods.go:89] "kube-vip-ha-174628" [b2d62768-e68e-4ce3-ad84-31ddac00688e] Running
	I0717 17:33:43.601852   32725 system_pods.go:89] "kube-vip-ha-174628-m02" [a6656a18-6176-4291-a094-e4b942e9ba1c] Running
	I0717 17:33:43.601857   32725 system_pods.go:89] "kube-vip-ha-174628-m03" [e77aed0c-76e0-4a43-bcc2-f4c96b7d3b37] Running
	I0717 17:33:43.601866   32725 system_pods.go:89] "storage-provisioner" [8c0601bb-36f6-434d-8e9d-1e326bf682f5] Running
	I0717 17:33:43.601875   32725 system_pods.go:126] duration metric: took 210.197708ms to wait for k8s-apps to be running ...
	I0717 17:33:43.601887   32725 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 17:33:43.601940   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:33:43.620097   32725 system_svc.go:56] duration metric: took 18.203606ms WaitForService to wait for kubelet
	I0717 17:33:43.620126   32725 kubeadm.go:582] duration metric: took 23.583439388s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:33:43.620150   32725 node_conditions.go:102] verifying NodePressure condition ...
	I0717 17:33:43.789600   32725 request.go:629] Waited for 169.359963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes
	I0717 17:33:43.789652   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes
	I0717 17:33:43.789658   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:43.789665   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:43.789671   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:43.793220   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:43.794230   32725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:33:43.794252   32725 node_conditions.go:123] node cpu capacity is 2
	I0717 17:33:43.794269   32725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:33:43.794272   32725 node_conditions.go:123] node cpu capacity is 2
	I0717 17:33:43.794276   32725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:33:43.794279   32725 node_conditions.go:123] node cpu capacity is 2
	I0717 17:33:43.794284   32725 node_conditions.go:105] duration metric: took 174.129056ms to run NodePressure ...
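	The node_conditions.go entries above read each node's ephemeral-storage and CPU capacity while verifying NodePressure. A minimal client-go sketch of that check (kubeconfig path assumed); it prints capacity and flags any non-Ready condition that is True, such as MemoryPressure or DiskPressure.

	// nodeconditions.go - illustrative sketch of a node capacity and pressure check.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
			for _, c := range n.Status.Conditions {
				// On a healthy node only the Ready condition should be True.
				if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True\n", c.Type)
				}
			}
		}
	}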
	I0717 17:33:43.794297   32725 start.go:241] waiting for startup goroutines ...
	I0717 17:33:43.794319   32725 start.go:255] writing updated cluster config ...
	I0717 17:33:43.794591   32725 ssh_runner.go:195] Run: rm -f paused
	I0717 17:33:43.845016   32725 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 17:33:43.847074   32725 out.go:177] * Done! kubectl is now configured to use "ha-174628" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.522318729Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721237836522298669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5c1c56b-1097-436b-a48f-1be6d91401fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.523079641Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27d679bc-5f01-4275-8a8f-a493be7e2ba8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.523129700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27d679bc-5f01-4275-8a8f-a493be7e2ba8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.523371162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721237628009895023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69af3791a58f6cd70f065a41e9453615e39f8d6b52615b6b10a22f9276870e64,PodSandboxId:ead8e0797918ab3cc149e030c67415a6da028f6e6438255003e750e47ddd1dd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721237422018013934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421982314266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421928423514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be
05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721237410216056539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172123740
6540056768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370441d5e9e25be3ceff0e96f53875a159099004aa797d2570be4e3e61aa9e59,PodSandboxId:9743622035ce2bd2b9a6be8681bb69bbbf89e91f886672970a9ed528068ed1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17212373905
30419999,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70628d083fb6fd792a0e57561bb9973,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721237387075364617,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721237387046609217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd,PodSandboxId:a5ac70b85a0a2a94429dd2f26d17401062eb6fb6872bba08b142d9e10c1dc17a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721237387002758616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac,PodSandboxId:793e0c3a8ff473b98d0fb8e714880ffefbed7e002c92bfdae5801f1e5cac505c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721237386972165967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27d679bc-5f01-4275-8a8f-a493be7e2ba8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.558617942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94640eca-dea7-40c6-82b6-85154bddf7b7 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.558746348Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94640eca-dea7-40c6-82b6-85154bddf7b7 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.560055545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79cc6cf3-0ca1-468c-a864-5e78f4d9820f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.560518672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721237836560490358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79cc6cf3-0ca1-468c-a864-5e78f4d9820f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.561156063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe273f07-36f4-402a-88e8-e4ddfef43d34 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.561216960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe273f07-36f4-402a-88e8-e4ddfef43d34 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.561423358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721237628009895023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69af3791a58f6cd70f065a41e9453615e39f8d6b52615b6b10a22f9276870e64,PodSandboxId:ead8e0797918ab3cc149e030c67415a6da028f6e6438255003e750e47ddd1dd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721237422018013934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421982314266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421928423514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be
05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721237410216056539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172123740
6540056768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370441d5e9e25be3ceff0e96f53875a159099004aa797d2570be4e3e61aa9e59,PodSandboxId:9743622035ce2bd2b9a6be8681bb69bbbf89e91f886672970a9ed528068ed1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17212373905
30419999,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70628d083fb6fd792a0e57561bb9973,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721237387075364617,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721237387046609217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd,PodSandboxId:a5ac70b85a0a2a94429dd2f26d17401062eb6fb6872bba08b142d9e10c1dc17a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721237387002758616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac,PodSandboxId:793e0c3a8ff473b98d0fb8e714880ffefbed7e002c92bfdae5801f1e5cac505c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721237386972165967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe273f07-36f4-402a-88e8-e4ddfef43d34 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.597907556Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a9ed870-757a-4661-9d7c-43c82f4910b5 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.597979395Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a9ed870-757a-4661-9d7c-43c82f4910b5 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.598929307Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eaee9792-26cc-44b2-9345-ec84c2d12271 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.599411005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721237836599383336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eaee9792-26cc-44b2-9345-ec84c2d12271 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.600030037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9635fd2f-e77f-478f-8ed2-b696b90f5a10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.600100659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9635fd2f-e77f-478f-8ed2-b696b90f5a10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.600331750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721237628009895023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69af3791a58f6cd70f065a41e9453615e39f8d6b52615b6b10a22f9276870e64,PodSandboxId:ead8e0797918ab3cc149e030c67415a6da028f6e6438255003e750e47ddd1dd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721237422018013934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421982314266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421928423514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be
05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721237410216056539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172123740
6540056768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370441d5e9e25be3ceff0e96f53875a159099004aa797d2570be4e3e61aa9e59,PodSandboxId:9743622035ce2bd2b9a6be8681bb69bbbf89e91f886672970a9ed528068ed1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17212373905
30419999,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70628d083fb6fd792a0e57561bb9973,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721237387075364617,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721237387046609217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd,PodSandboxId:a5ac70b85a0a2a94429dd2f26d17401062eb6fb6872bba08b142d9e10c1dc17a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721237387002758616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac,PodSandboxId:793e0c3a8ff473b98d0fb8e714880ffefbed7e002c92bfdae5801f1e5cac505c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721237386972165967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9635fd2f-e77f-478f-8ed2-b696b90f5a10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.635588987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbb03de4-ac63-4c6e-b121-3b62de8c7965 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.635723824Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbb03de4-ac63-4c6e-b121-3b62de8c7965 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.636702364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1966673a-e369-4616-8660-9846c61a30ff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.637181899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721237836637156876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1966673a-e369-4616-8660-9846c61a30ff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.637721813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be87a6c1-86c2-4b80-866e-43d95ba7b509 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.637786713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be87a6c1-86c2-4b80-866e-43d95ba7b509 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:37:16 ha-174628 crio[675]: time="2024-07-17 17:37:16.638024267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721237628009895023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69af3791a58f6cd70f065a41e9453615e39f8d6b52615b6b10a22f9276870e64,PodSandboxId:ead8e0797918ab3cc149e030c67415a6da028f6e6438255003e750e47ddd1dd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721237422018013934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421982314266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421928423514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be
05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721237410216056539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172123740
6540056768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370441d5e9e25be3ceff0e96f53875a159099004aa797d2570be4e3e61aa9e59,PodSandboxId:9743622035ce2bd2b9a6be8681bb69bbbf89e91f886672970a9ed528068ed1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17212373905
30419999,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70628d083fb6fd792a0e57561bb9973,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721237387075364617,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721237387046609217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd,PodSandboxId:a5ac70b85a0a2a94429dd2f26d17401062eb6fb6872bba08b142d9e10c1dc17a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721237387002758616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac,PodSandboxId:793e0c3a8ff473b98d0fb8e714880ffefbed7e002c92bfdae5801f1e5cac505c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721237386972165967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be87a6c1-86c2-4b80-866e-43d95ba7b509 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	88ba3b0cb3105       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   c4d7c5b8a369b       busybox-fc5497c4f-8zv26
	69af3791a58f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ead8e0797918a       storage-provisioner
	976aeedd4a51e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   6732d32de6a25       coredns-7db6d8ff4d-ljjl7
	97987539971dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   9ca7e3b66f8e6       coredns-7db6d8ff4d-nb567
	2fefa59bf46cd       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    7 minutes ago       Running             kindnet-cni               0                   db21995c3cb31       kindnet-k6jnp
	d139046cefa3a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      7 minutes ago       Running             kube-proxy                0                   4b7a03b7f681c       kube-proxy-fqf9q
	370441d5e9e25       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   9743622035ce2       kube-vip-ha-174628
	e1c91b7db4ab1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   d488537da1381       etcd-ha-174628
	889d28a83e85b       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      7 minutes ago       Running             kube-scheduler            0                   4c7f495eb3d6a       kube-scheduler-ha-174628
	9880796029aa2       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      7 minutes ago       Running             kube-controller-manager   0                   a5ac70b85a0a2       kube-controller-manager-ha-174628
	dbb0842f9354f       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      7 minutes ago       Running             kube-apiserver            0                   793e0c3a8ff47       kube-apiserver-ha-174628
	
	
	==> coredns [976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9] <==
	[INFO] 10.244.2.2:33788 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095709s
	[INFO] 10.244.0.4:46982 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154802s
	[INFO] 10.244.0.4:56230 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001992689s
	[INFO] 10.244.0.4:41627 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003841s
	[INFO] 10.244.0.4:58911 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145s
	[INFO] 10.244.0.4:42628 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001405769s
	[INFO] 10.244.0.4:53106 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132475s
	[INFO] 10.244.1.2:56143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010532s
	[INFO] 10.244.1.2:57864 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093166s
	[INFO] 10.244.1.2:36333 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127244s
	[INFO] 10.244.1.2:59545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001305574s
	[INFO] 10.244.1.2:38967 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068655s
	[INFO] 10.244.2.2:42756 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113607s
	[INFO] 10.244.2.2:43563 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069199s
	[INFO] 10.244.0.4:59480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109399s
	[INFO] 10.244.0.4:42046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068182s
	[INFO] 10.244.0.4:52729 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087202s
	[INFO] 10.244.1.2:54148 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075008s
	[INFO] 10.244.2.2:34613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101677s
	[INFO] 10.244.2.2:34221 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203479s
	[INFO] 10.244.0.4:35705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081127s
	[INFO] 10.244.0.4:36734 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090761s
	[INFO] 10.244.1.2:34328 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093559s
	[INFO] 10.244.1.2:39930 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149652s
	[INFO] 10.244.1.2:55584 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101975s
	
	
	==> coredns [97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb] <==
	[INFO] 10.244.2.2:56026 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.004666156s
	[INFO] 10.244.2.2:42295 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.012320218s
	[INFO] 10.244.2.2:36255 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006059425s
	[INFO] 10.244.0.4:34085 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000090919s
	[INFO] 10.244.0.4:39117 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001872218s
	[INFO] 10.244.1.2:51622 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00157319s
	[INFO] 10.244.2.2:60810 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001843s
	[INFO] 10.244.2.2:59317 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00028437s
	[INFO] 10.244.2.2:38028 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131271s
	[INFO] 10.244.0.4:34076 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171504s
	[INFO] 10.244.0.4:47718 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126429s
	[INFO] 10.244.1.2:45110 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001972368s
	[INFO] 10.244.1.2:56072 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000151997s
	[INFO] 10.244.1.2:56149 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091586s
	[INFO] 10.244.2.2:58101 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116587s
	[INFO] 10.244.2.2:38105 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059217s
	[INFO] 10.244.0.4:33680 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067251s
	[INFO] 10.244.1.2:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149516s
	[INFO] 10.244.1.2:49668 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120356s
	[INFO] 10.244.1.2:39442 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065763s
	[INFO] 10.244.2.2:49955 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116571s
	[INFO] 10.244.2.2:46651 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013941s
	[INFO] 10.244.0.4:39128 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097533s
	[INFO] 10.244.0.4:36840 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042262s
	[INFO] 10.244.1.2:36575 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084857s
	
	
	==> describe nodes <==
	Name:               ha-174628
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T17_29_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:29:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:37:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:33:58 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:33:58 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:33:58 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:33:58 +0000   Wed, 17 Jul 2024 17:30:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-174628
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 38d679c72879470c96b5b9e9677b521d
	  System UUID:                38d679c7-2879-470c-96b5-b9e9677b521d
	  Boot ID:                    dc99f06a-b6ac-4ceb-b149-a41be92c5af1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8zv26              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 coredns-7db6d8ff4d-ljjl7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system                 coredns-7db6d8ff4d-nb567             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system                 etcd-ha-174628                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m25s
	  kube-system                 kindnet-k6jnp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m11s
	  kube-system                 kube-apiserver-ha-174628             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-controller-manager-ha-174628    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-proxy-fqf9q                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-scheduler-ha-174628             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-vip-ha-174628                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m9s   kube-proxy       
	  Normal  Starting                 7m23s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m23s  kubelet          Node ha-174628 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m23s  kubelet          Node ha-174628 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m23s  kubelet          Node ha-174628 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m11s  node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal  NodeReady                6m55s  kubelet          Node ha-174628 status is now: NodeReady
	  Normal  RegisteredNode           4m55s  node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal  RegisteredNode           3m42s  node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	
	
	Name:               ha-174628-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T17_32_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:32:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:34:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 17:34:05 +0000   Wed, 17 Jul 2024 17:35:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 17:34:05 +0000   Wed, 17 Jul 2024 17:35:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 17:34:05 +0000   Wed, 17 Jul 2024 17:35:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 17:34:05 +0000   Wed, 17 Jul 2024 17:35:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-174628-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 903b989e686a4ab6b3e3c3b6b498bfac
	  System UUID:                903b989e-686a-4ab6-b3e3-c3b6b498bfac
	  Boot ID:                    90064014-f03d-439c-b564-d9933dddd6e9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ftgzz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-174628-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m11s
	  kube-system                 kindnet-79txz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m13s
	  kube-system                 kube-apiserver-ha-174628-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-ha-174628-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-7lchn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-ha-174628-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-vip-ha-174628-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m13s (x8 over 5m13s)  kubelet          Node ha-174628-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m13s (x8 over 5m13s)  kubelet          Node ha-174628-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x7 over 5m13s)  kubelet          Node ha-174628-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           3m42s                  node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-174628-m02 status is now: NodeNotReady
	
	
	Name:               ha-174628-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T17_33_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:33:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:37:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:34:18 +0000   Wed, 17 Jul 2024 17:33:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:34:18 +0000   Wed, 17 Jul 2024 17:33:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:34:18 +0000   Wed, 17 Jul 2024 17:33:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:34:18 +0000   Wed, 17 Jul 2024 17:33:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    ha-174628-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e252934bd064e64b4b5442d8b76155e
	  System UUID:                7e252934-bd06-4e64-b4b5-442d8b76155e
	  Boot ID:                    1795f76b-0f9d-4731-97d8-e0a76fec4a3b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5mnv5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 etcd-ha-174628-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m59s
	  kube-system                 kindnet-p7tg6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-apiserver-ha-174628-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-controller-manager-ha-174628-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-tjkww                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-scheduler-ha-174628-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-vip-ha-174628-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 3m56s            kube-proxy       
	  Normal  RegisteredNode           4m               node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	  Normal  Starting                 4m               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m (x2 over 4m)  kubelet          Node ha-174628-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m (x2 over 4m)  kubelet          Node ha-174628-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m (x2 over 4m)  kubelet          Node ha-174628-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s            node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	  Normal  RegisteredNode           3m42s            node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	  Normal  NodeReady                3m39s            kubelet          Node ha-174628-m03 status is now: NodeReady
	
	
	Name:               ha-174628-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T17_34_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:34:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:37:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:34:48 +0000   Wed, 17 Jul 2024 17:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:34:48 +0000   Wed, 17 Jul 2024 17:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:34:48 +0000   Wed, 17 Jul 2024 17:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:34:48 +0000   Wed, 17 Jul 2024 17:34:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-174628-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1beb916d1ab94a9e97732204939d8f7c
	  System UUID:                1beb916d-1ab9-4a9e-9773-2204939d8f7c
	  Boot ID:                    f498dee8-37a8-457e-b1f0-32546079d21b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pt58p       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-gb548    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m54s            kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-174628-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-174628-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-174628-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s            node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal  RegisteredNode           2m57s            node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal  NodeReady                2m41s            kubelet          Node ha-174628-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul17 17:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050826] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.422835] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.691787] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.517319] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.259835] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.065539] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054802] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.175947] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.103995] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.251338] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.953322] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +4.318399] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +0.059032] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.943760] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.083790] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.749430] kauditd_printk_skb: 18 callbacks suppressed
	[Jul17 17:30] kauditd_printk_skb: 38 callbacks suppressed
	[Jul17 17:32] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147] <==
	{"level":"warn","ts":"2024-07-17T17:37:16.829131Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.899577Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.907167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.910572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.924974Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.929429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.932028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.938648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.941818Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.944845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.95294Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.958844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.964551Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.968276Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.971304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.978593Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.98472Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.990875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.994638Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:16.99743Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:17.003355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:17.012652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:17.021106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:17.027498Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:37:17.028801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:37:17 up 7 min,  0 users,  load average: 0.16, 0.27, 0.16
	Linux ha-174628 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0] <==
	I0717 17:36:41.259241       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:36:51.265275       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:36:51.265317       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:36:51.265451       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:36:51.265470       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:36:51.265519       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:36:51.265524       1 main.go:303] handling current node
	I0717 17:36:51.265534       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:36:51.265537       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:37:01.265630       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:37:01.265788       1 main.go:303] handling current node
	I0717 17:37:01.265816       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:37:01.265834       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:37:01.265990       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:37:01.266018       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:37:01.266083       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:37:01.266102       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:37:11.258365       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:37:11.258506       1 main.go:303] handling current node
	I0717 17:37:11.258539       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:37:11.258597       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:37:11.258823       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:37:11.258876       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:37:11.258985       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:37:11.259023       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac] <==
	I0717 17:29:53.273012       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 17:29:53.304033       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 17:29:53.322260       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 17:30:05.278733       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 17:30:05.829969       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0717 17:33:49.175346       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37834: use of closed network connection
	E0717 17:33:49.363385       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37856: use of closed network connection
	E0717 17:33:49.556989       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37866: use of closed network connection
	E0717 17:33:49.748583       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37896: use of closed network connection
	E0717 17:33:49.918003       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37904: use of closed network connection
	E0717 17:33:50.083798       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37908: use of closed network connection
	E0717 17:33:50.253740       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37926: use of closed network connection
	E0717 17:33:50.415077       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37950: use of closed network connection
	E0717 17:33:50.869201       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37996: use of closed network connection
	E0717 17:33:51.029232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38008: use of closed network connection
	E0717 17:33:51.205004       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38026: use of closed network connection
	E0717 17:33:51.367737       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38052: use of closed network connection
	E0717 17:33:51.543184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38072: use of closed network connection
	E0717 17:33:51.723941       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38082: use of closed network connection
	I0717 17:34:23.453636       1 trace.go:236] Trace[937411472]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:288ee792-0cd5-4997-aded-811d44e718b5,client:192.168.39.100,api-group:apps,api-version:v1,name:kube-proxy,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:daemonsets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy/status,user-agent:kube-controller-manager/v1.30.2 (linux/amd64) kubernetes/3968350/system:serviceaccount:kube-system:daemon-set-controller,verb:PUT (17-Jul-2024 17:34:22.922) (total time: 531ms):
	Trace[937411472]: ["GuaranteedUpdate etcd3" audit-id:288ee792-0cd5-4997-aded-811d44e718b5,key:/daemonsets/kube-system/kube-proxy,type:*apps.DaemonSet,resource:daemonsets.apps 531ms (17:34:22.922)
	Trace[937411472]:  ---"Txn call completed" 527ms (17:34:23.452)]
	Trace[937411472]: [531.584649ms] [531.584649ms] END
	W0717 17:35:11.709044       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.187]
	
	
	==> kube-controller-manager [9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd] <==
	I0717 17:33:16.618143       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-174628-m03" podCIDRs=["10.244.2.0/24"]
	I0717 17:33:20.259471       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-174628-m03"
	I0717 17:33:44.738335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.540453ms"
	I0717 17:33:44.842023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.604384ms"
	I0717 17:33:45.030222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.053637ms"
	I0717 17:33:45.068802       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.998741ms"
	I0717 17:33:45.068894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.03µs"
	I0717 17:33:45.072815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.735µs"
	I0717 17:33:45.092123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.091µs"
	I0717 17:33:45.250623       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.609533ms"
	I0717 17:33:45.250972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="165.428µs"
	I0717 17:33:46.533183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.696µs"
	I0717 17:33:48.145337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.446596ms"
	I0717 17:33:48.146334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.802µs"
	I0717 17:33:48.243381       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.536655ms"
	I0717 17:33:48.243470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.778µs"
	I0717 17:33:48.776597       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.216202ms"
	I0717 17:33:48.776849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.859µs"
	I0717 17:34:17.911538       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-174628-m04\" does not exist"
	I0717 17:34:17.945389       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-174628-m04" podCIDRs=["10.244.3.0/24"]
	I0717 17:34:20.289197       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-174628-m04"
	I0717 17:34:36.236310       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174628-m04"
	I0717 17:35:29.796271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174628-m04"
	I0717 17:35:29.997017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.571717ms"
	I0717 17:35:29.997101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.06µs"
	
	
	==> kube-proxy [d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78] <==
	I0717 17:30:06.937763       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:30:06.958644       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0717 17:30:06.996584       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:30:06.996651       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:30:06.996751       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:30:06.999933       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:30:07.000394       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:30:07.000419       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:30:07.002402       1 config.go:192] "Starting service config controller"
	I0717 17:30:07.002636       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:30:07.002722       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:30:07.002729       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:30:07.003788       1 config.go:319] "Starting node config controller"
	I0717 17:30:07.003809       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:30:07.103596       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:30:07.103622       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:30:07.103965       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9] <==
	W0717 17:29:50.057042       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:29:50.057060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:29:50.057234       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:29:50.057245       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 17:29:50.058801       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:29:50.059137       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:29:50.909083       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 17:29:50.909138       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 17:29:50.973090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 17:29:50.973186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 17:29:51.051258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:29:51.051559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:29:51.057052       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:29:51.057213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:29:51.212147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 17:29:51.212308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 17:29:51.220191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:29:51.220590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:29:51.576445       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:29:51.576537       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 17:29:53.526765       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 17:34:18.004216       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pt58p\": pod kindnet-pt58p is already assigned to node \"ha-174628-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pt58p" node="ha-174628-m04"
	E0717 17:34:18.005897       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ce812d5f-7672-4d13-ab08-9a75c9507d83(kube-system/kindnet-pt58p) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pt58p"
	E0717 17:34:18.005978       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pt58p\": pod kindnet-pt58p is already assigned to node \"ha-174628-m04\"" pod="kube-system/kindnet-pt58p"
	I0717 17:34:18.006011       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pt58p" node="ha-174628-m04"
	
	
	==> kubelet <==
	Jul 17 17:32:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:32:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:32:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:33:44 ha-174628 kubelet[1358]: I0717 17:33:44.738841    1358 topology_manager.go:215] "Topology Admit Handler" podUID="fe9c4738-6334-4fc5-b8a3-dc249512fa0a" podNamespace="default" podName="busybox-fc5497c4f-8zv26"
	Jul 17 17:33:44 ha-174628 kubelet[1358]: I0717 17:33:44.887955    1358 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgfct\" (UniqueName: \"kubernetes.io/projected/fe9c4738-6334-4fc5-b8a3-dc249512fa0a-kube-api-access-fgfct\") pod \"busybox-fc5497c4f-8zv26\" (UID: \"fe9c4738-6334-4fc5-b8a3-dc249512fa0a\") " pod="default/busybox-fc5497c4f-8zv26"
	Jul 17 17:33:53 ha-174628 kubelet[1358]: E0717 17:33:53.208504    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:33:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:33:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:33:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:33:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:34:53 ha-174628 kubelet[1358]: E0717 17:34:53.208997    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:34:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:34:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:34:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:34:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:35:53 ha-174628 kubelet[1358]: E0717 17:35:53.208148    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:35:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:35:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:35:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:35:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:36:53 ha-174628 kubelet[1358]: E0717 17:36:53.209167    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:36:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:36:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:36:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:36:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174628 -n ha-174628
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174628 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (57.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr: exit status 3 (3.186411642s)

                                                
                                                
-- stdout --
	ha-174628
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174628-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:37:21.541212   37801 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:37:21.541450   37801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:21.541459   37801 out.go:304] Setting ErrFile to fd 2...
	I0717 17:37:21.541467   37801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:21.541625   37801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:37:21.541786   37801 out.go:298] Setting JSON to false
	I0717 17:37:21.541815   37801 mustload.go:65] Loading cluster: ha-174628
	I0717 17:37:21.541866   37801 notify.go:220] Checking for updates...
	I0717 17:37:21.542381   37801 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:37:21.542402   37801 status.go:255] checking status of ha-174628 ...
	I0717 17:37:21.542878   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:21.542928   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:21.562963   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0717 17:37:21.563496   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:21.564181   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:21.564209   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:21.564605   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:21.564841   37801 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:37:21.566735   37801 status.go:330] ha-174628 host status = "Running" (err=<nil>)
	I0717 17:37:21.566754   37801 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:21.567033   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:21.567067   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:21.583075   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0717 17:37:21.583463   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:21.583941   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:21.583961   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:21.584353   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:21.584553   37801 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:37:21.587447   37801 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:21.587804   37801 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:21.587824   37801 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:21.587926   37801 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:21.588212   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:21.588242   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:21.603043   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35579
	I0717 17:37:21.603450   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:21.603982   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:21.604005   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:21.604357   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:21.604535   37801 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:37:21.604744   37801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:21.604779   37801 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:37:21.607409   37801 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:21.607823   37801 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:21.607858   37801 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:21.607934   37801 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:37:21.608097   37801 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:37:21.608265   37801 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:37:21.608382   37801 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:37:21.687879   37801 ssh_runner.go:195] Run: systemctl --version
	I0717 17:37:21.693502   37801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:21.708688   37801 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:21.708712   37801 api_server.go:166] Checking apiserver status ...
	I0717 17:37:21.708746   37801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:21.722498   37801 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup
	W0717 17:37:21.732387   37801 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:21.732430   37801 ssh_runner.go:195] Run: ls
	I0717 17:37:21.738839   37801 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:21.743100   37801 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:21.743122   37801 status.go:422] ha-174628 apiserver status = Running (err=<nil>)
	I0717 17:37:21.743135   37801 status.go:257] ha-174628 status: &{Name:ha-174628 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:21.743158   37801 status.go:255] checking status of ha-174628-m02 ...
	I0717 17:37:21.743465   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:21.743509   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:21.758661   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39557
	I0717 17:37:21.759083   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:21.759564   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:21.759588   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:21.759894   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:21.760060   37801 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:37:21.761549   37801 status.go:330] ha-174628-m02 host status = "Running" (err=<nil>)
	I0717 17:37:21.761562   37801 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:21.761841   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:21.761887   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:21.776259   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
	I0717 17:37:21.776674   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:21.777147   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:21.777191   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:21.777528   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:21.777808   37801 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:37:21.780636   37801 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:21.781100   37801 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:21.781120   37801 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:21.781290   37801 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:21.781756   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:21.781802   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:21.797170   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0717 17:37:21.797522   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:21.797916   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:21.797941   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:21.798188   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:21.798364   37801 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:37:21.798595   37801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:21.798615   37801 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:37:21.801395   37801 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:21.801793   37801 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:21.801820   37801 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:21.801951   37801 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:37:21.802145   37801 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:37:21.802294   37801 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:37:21.802410   37801 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	W0717 17:37:24.353284   37801 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.97:22: connect: no route to host
	W0717 17:37:24.353406   37801 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	E0717 17:37:24.353440   37801 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:24.353451   37801 status.go:257] ha-174628-m02 status: &{Name:ha-174628-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 17:37:24.353477   37801 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:24.353498   37801 status.go:255] checking status of ha-174628-m03 ...
	I0717 17:37:24.353831   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:24.353877   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:24.368465   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42109
	I0717 17:37:24.368867   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:24.369307   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:24.369329   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:24.369639   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:24.369880   37801 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:37:24.371397   37801 status.go:330] ha-174628-m03 host status = "Running" (err=<nil>)
	I0717 17:37:24.371409   37801 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:24.371706   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:24.371741   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:24.386175   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0717 17:37:24.386670   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:24.387137   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:24.387152   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:24.387532   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:24.387746   37801 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:37:24.390889   37801 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:24.391377   37801 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:24.391405   37801 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:24.391538   37801 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:24.392034   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:24.392088   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:24.406505   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I0717 17:37:24.406874   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:24.407294   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:24.407312   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:24.407585   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:24.407759   37801 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:37:24.407957   37801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:24.407977   37801 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:37:24.410929   37801 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:24.411336   37801 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:24.411370   37801 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:24.411492   37801 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:37:24.411652   37801 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:37:24.411807   37801 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:37:24.412086   37801 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:37:24.491996   37801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:24.505599   37801 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:24.505632   37801 api_server.go:166] Checking apiserver status ...
	I0717 17:37:24.505671   37801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:24.518253   37801 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0717 17:37:24.526358   37801 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:24.526395   37801 ssh_runner.go:195] Run: ls
	I0717 17:37:24.530358   37801 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:24.535924   37801 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:24.535943   37801 status.go:422] ha-174628-m03 apiserver status = Running (err=<nil>)
	I0717 17:37:24.535951   37801 status.go:257] ha-174628-m03 status: &{Name:ha-174628-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:24.535963   37801 status.go:255] checking status of ha-174628-m04 ...
	I0717 17:37:24.536240   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:24.536278   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:24.551650   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37633
	I0717 17:37:24.552100   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:24.552633   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:24.552655   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:24.552964   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:24.553148   37801 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:37:24.554918   37801 status.go:330] ha-174628-m04 host status = "Running" (err=<nil>)
	I0717 17:37:24.554933   37801 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:24.555211   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:24.555240   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:24.570810   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0717 17:37:24.571138   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:24.571648   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:24.571665   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:24.571920   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:24.572087   37801 main.go:141] libmachine: (ha-174628-m04) Calling .GetIP
	I0717 17:37:24.575002   37801 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:24.575381   37801 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:24.575420   37801 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:24.575501   37801 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:24.575842   37801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:24.575880   37801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:24.589949   37801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38725
	I0717 17:37:24.590374   37801 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:24.590796   37801 main.go:141] libmachine: Using API Version  1
	I0717 17:37:24.590841   37801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:24.591110   37801 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:24.591290   37801 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:37:24.591472   37801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:24.591494   37801 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:37:24.593861   37801 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:24.594233   37801 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:24.594266   37801 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:24.594347   37801 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:37:24.594491   37801 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:37:24.594626   37801 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:37:24.594843   37801 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	I0717 17:37:24.675709   37801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:24.688464   37801 status.go:257] ha-174628-m04 status: &{Name:ha-174628-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
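For context on the apiserver lines in the log above ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200"), here is a minimal standalone Go sketch of that kind of probe. It is an illustration only, not minikube's implementation; the endpoint is the cluster VIP taken from the log, and TLS verification is skipped purely to keep the sketch short.

// healthz_probe.go — illustrative stdlib HTTPS probe of the apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip certificate verification only because this sketch has no cluster CA at hand.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers "ok" with status 200, matching the log lines above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}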
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr: exit status 3 (5.339606118s)

-- stdout --
	ha-174628
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174628-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0717 17:37:25.548524   37885 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:37:25.548768   37885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:25.548778   37885 out.go:304] Setting ErrFile to fd 2...
	I0717 17:37:25.548782   37885 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:25.549041   37885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:37:25.549242   37885 out.go:298] Setting JSON to false
	I0717 17:37:25.549271   37885 mustload.go:65] Loading cluster: ha-174628
	I0717 17:37:25.549326   37885 notify.go:220] Checking for updates...
	I0717 17:37:25.549703   37885 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:37:25.549718   37885 status.go:255] checking status of ha-174628 ...
	I0717 17:37:25.550135   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:25.550194   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:25.569901   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0717 17:37:25.570296   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:25.570911   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:25.570931   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:25.571309   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:25.571502   37885 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:37:25.573092   37885 status.go:330] ha-174628 host status = "Running" (err=<nil>)
	I0717 17:37:25.573108   37885 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:25.573415   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:25.573449   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:25.588587   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38021
	I0717 17:37:25.589002   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:25.589474   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:25.589492   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:25.589789   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:25.589945   37885 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:37:25.592735   37885 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:25.593168   37885 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:25.593190   37885 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:25.593325   37885 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:25.593641   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:25.593676   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:25.608705   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44805
	I0717 17:37:25.609108   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:25.609562   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:25.609582   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:25.609843   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:25.610010   37885 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:37:25.610165   37885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:25.610187   37885 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:37:25.612994   37885 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:25.613446   37885 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:25.613477   37885 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:25.613605   37885 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:37:25.613760   37885 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:37:25.613914   37885 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:37:25.614037   37885 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:37:25.691956   37885 ssh_runner.go:195] Run: systemctl --version
	I0717 17:37:25.698289   37885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:25.712618   37885 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:25.712642   37885 api_server.go:166] Checking apiserver status ...
	I0717 17:37:25.712677   37885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:25.726628   37885 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup
	W0717 17:37:25.735743   37885 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:25.735805   37885 ssh_runner.go:195] Run: ls
	I0717 17:37:25.740862   37885 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:25.746717   37885 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:25.746736   37885 status.go:422] ha-174628 apiserver status = Running (err=<nil>)
	I0717 17:37:25.746744   37885 status.go:257] ha-174628 status: &{Name:ha-174628 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:25.746759   37885 status.go:255] checking status of ha-174628-m02 ...
	I0717 17:37:25.747026   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:25.747063   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:25.761918   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I0717 17:37:25.762309   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:25.762740   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:25.762759   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:25.763054   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:25.763220   37885 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:37:25.764607   37885 status.go:330] ha-174628-m02 host status = "Running" (err=<nil>)
	I0717 17:37:25.764623   37885 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:25.764919   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:25.764968   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:25.778838   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
	I0717 17:37:25.779217   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:25.779725   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:25.779741   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:25.780076   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:25.780267   37885 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:37:25.782629   37885 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:25.783094   37885 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:25.783114   37885 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:25.783335   37885 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:25.783617   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:25.783664   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:25.798621   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
	I0717 17:37:25.799049   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:25.799483   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:25.799503   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:25.799828   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:25.800023   37885 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:37:25.800205   37885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:25.800222   37885 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:37:25.803103   37885 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:25.803555   37885 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:25.803582   37885 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:25.803729   37885 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:37:25.803899   37885 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:37:25.804037   37885 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:37:25.804140   37885 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	W0717 17:37:27.429266   37885 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:27.429323   37885 retry.go:31] will retry after 348.271972ms: dial tcp 192.168.39.97:22: connect: no route to host
	W0717 17:37:30.497319   37885 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.97:22: connect: no route to host
	W0717 17:37:30.497434   37885 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	E0717 17:37:30.497454   37885 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:30.497462   37885 status.go:257] ha-174628-m02 status: &{Name:ha-174628-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 17:37:30.497478   37885 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:30.497485   37885 status.go:255] checking status of ha-174628-m03 ...
	I0717 17:37:30.497790   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:30.497829   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:30.513938   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I0717 17:37:30.514358   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:30.514877   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:30.514901   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:30.515206   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:30.515382   37885 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:37:30.516894   37885 status.go:330] ha-174628-m03 host status = "Running" (err=<nil>)
	I0717 17:37:30.516910   37885 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:30.517319   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:30.517360   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:30.533067   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46325
	I0717 17:37:30.533496   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:30.533968   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:30.533991   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:30.534291   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:30.534452   37885 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:37:30.537143   37885 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:30.537555   37885 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:30.537589   37885 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:30.537721   37885 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:30.538044   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:30.538080   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:30.553786   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37445
	I0717 17:37:30.554168   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:30.554674   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:30.554692   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:30.554952   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:30.555143   37885 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:37:30.555365   37885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:30.555386   37885 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:37:30.558409   37885 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:30.558888   37885 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:30.558913   37885 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:30.559077   37885 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:37:30.559229   37885 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:37:30.559366   37885 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:37:30.559481   37885 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:37:30.640691   37885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:30.654661   37885 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:30.654684   37885 api_server.go:166] Checking apiserver status ...
	I0717 17:37:30.654713   37885 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:30.668297   37885 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0717 17:37:30.677750   37885 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:30.677808   37885 ssh_runner.go:195] Run: ls
	I0717 17:37:30.681744   37885 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:30.686073   37885 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:30.686093   37885 status.go:422] ha-174628-m03 apiserver status = Running (err=<nil>)
	I0717 17:37:30.686100   37885 status.go:257] ha-174628-m03 status: &{Name:ha-174628-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:30.686113   37885 status.go:255] checking status of ha-174628-m04 ...
	I0717 17:37:30.686394   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:30.686424   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:30.702238   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I0717 17:37:30.702683   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:30.703172   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:30.703190   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:30.703525   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:30.703744   37885 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:37:30.705246   37885 status.go:330] ha-174628-m04 host status = "Running" (err=<nil>)
	I0717 17:37:30.705263   37885 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:30.705572   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:30.705602   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:30.721995   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I0717 17:37:30.722507   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:30.722936   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:30.722958   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:30.723310   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:30.723526   37885 main.go:141] libmachine: (ha-174628-m04) Calling .GetIP
	I0717 17:37:30.726229   37885 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:30.726630   37885 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:30.726654   37885 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:30.726725   37885 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:30.726988   37885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:30.727018   37885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:30.742253   37885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I0717 17:37:30.742744   37885 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:30.743275   37885 main.go:141] libmachine: Using API Version  1
	I0717 17:37:30.743295   37885 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:30.743633   37885 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:30.743809   37885 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:37:30.744006   37885 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:30.744029   37885 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:37:30.746859   37885 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:30.747276   37885 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:30.747303   37885 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:30.747445   37885 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:37:30.747593   37885 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:37:30.747735   37885 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:37:30.747849   37885 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	I0717 17:37:30.831859   37885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:30.846942   37885 status.go:257] ha-174628-m04 status: &{Name:ha-174628-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
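The ha-174628-m02 lines above show the failure mode behind "host: Error": TCP dials to 192.168.39.97:22 return "no route to host", are retried with a short delay, and eventually give up. The following is a minimal standalone Go sketch of that dial-and-retry pattern, not minikube's sshutil code; the attempt count and delay are arbitrary values for illustration, while the address is the node IP from the log.

// dial_retry.go — illustrative dial-with-retry against a node's SSH port.
package main

import (
	"fmt"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry after %v): %v\n", wait, err)
		time.Sleep(wait)
	}
	return nil, lastErr
}

func main() {
	conn, err := dialWithRetry("192.168.39.97:22", 3, 300*time.Millisecond)
	if err != nil {
		// When all retries fail, the status output reports the node as Host:Error,
		// Kubelet:Nonexistent, APIServer:Nonexistent, as seen above.
		fmt.Println("giving up:", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable")
}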
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr: exit status 3 (4.758693635s)

-- stdout --
	ha-174628
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174628-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0717 17:37:32.560281   38001 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:37:32.560388   38001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:32.560394   38001 out.go:304] Setting ErrFile to fd 2...
	I0717 17:37:32.560407   38001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:32.560962   38001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:37:32.561446   38001 out.go:298] Setting JSON to false
	I0717 17:37:32.561487   38001 mustload.go:65] Loading cluster: ha-174628
	I0717 17:37:32.561586   38001 notify.go:220] Checking for updates...
	I0717 17:37:32.562040   38001 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:37:32.562066   38001 status.go:255] checking status of ha-174628 ...
	I0717 17:37:32.562651   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:32.562694   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:32.578390   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40469
	I0717 17:37:32.578789   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:32.579324   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:32.579347   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:32.579773   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:32.579979   38001 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:37:32.581635   38001 status.go:330] ha-174628 host status = "Running" (err=<nil>)
	I0717 17:37:32.581654   38001 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:32.581965   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:32.582014   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:32.596266   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43337
	I0717 17:37:32.596575   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:32.597015   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:32.597036   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:32.597303   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:32.597483   38001 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:37:32.599808   38001 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:32.600193   38001 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:32.600210   38001 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:32.600304   38001 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:32.600679   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:32.600730   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:32.614920   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0717 17:37:32.615256   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:32.615640   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:32.615660   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:32.615933   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:32.616096   38001 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:37:32.616280   38001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:32.616316   38001 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:37:32.618738   38001 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:32.619183   38001 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:32.619214   38001 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:32.619451   38001 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:37:32.619634   38001 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:37:32.619804   38001 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:37:32.619965   38001 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:37:32.700292   38001 ssh_runner.go:195] Run: systemctl --version
	I0717 17:37:32.706844   38001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:32.721784   38001 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:32.721815   38001 api_server.go:166] Checking apiserver status ...
	I0717 17:37:32.721852   38001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:32.734696   38001 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup
	W0717 17:37:32.743087   38001 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:32.743145   38001 ssh_runner.go:195] Run: ls
	I0717 17:37:32.747604   38001 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:32.751788   38001 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:32.751810   38001 status.go:422] ha-174628 apiserver status = Running (err=<nil>)
	I0717 17:37:32.751819   38001 status.go:257] ha-174628 status: &{Name:ha-174628 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:32.751837   38001 status.go:255] checking status of ha-174628-m02 ...
	I0717 17:37:32.752149   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:32.752197   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:32.768295   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
	I0717 17:37:32.768660   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:32.769139   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:32.769158   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:32.769460   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:32.769671   38001 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:37:32.771101   38001 status.go:330] ha-174628-m02 host status = "Running" (err=<nil>)
	I0717 17:37:32.771115   38001 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:32.771418   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:32.771449   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:32.785537   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39847
	I0717 17:37:32.785880   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:32.786338   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:32.786364   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:32.786686   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:32.786868   38001 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:37:32.789458   38001 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:32.789893   38001 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:32.789919   38001 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:32.790002   38001 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:32.790270   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:32.790303   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:32.804235   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37703
	I0717 17:37:32.804634   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:32.805146   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:32.805164   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:32.805429   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:32.805578   38001 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:37:32.805768   38001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:32.805788   38001 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:37:32.808165   38001 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:32.808524   38001 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:32.808554   38001 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:32.808718   38001 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:37:32.808869   38001 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:37:32.809014   38001 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:37:32.809156   38001 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	W0717 17:37:33.569199   38001 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:33.569270   38001 retry.go:31] will retry after 283.296611ms: dial tcp 192.168.39.97:22: connect: no route to host
	W0717 17:37:36.929246   38001 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.97:22: connect: no route to host
	W0717 17:37:36.929322   38001 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	E0717 17:37:36.929335   38001 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:36.929348   38001 status.go:257] ha-174628-m02 status: &{Name:ha-174628-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 17:37:36.929376   38001 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:36.929385   38001 status.go:255] checking status of ha-174628-m03 ...
	I0717 17:37:36.929822   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:36.929877   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:36.944341   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32993
	I0717 17:37:36.944793   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:36.945266   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:36.945286   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:36.945618   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:36.945795   38001 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:37:36.947321   38001 status.go:330] ha-174628-m03 host status = "Running" (err=<nil>)
	I0717 17:37:36.947339   38001 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:36.947721   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:36.947765   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:36.961904   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36855
	I0717 17:37:36.962244   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:36.962679   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:36.962697   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:36.962971   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:36.963128   38001 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:37:36.965724   38001 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:36.966161   38001 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:36.966185   38001 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:36.966295   38001 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:36.966636   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:36.966667   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:36.981038   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42357
	I0717 17:37:36.981368   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:36.981765   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:36.981783   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:36.982064   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:36.982250   38001 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:37:36.982423   38001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:36.982443   38001 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:37:36.984715   38001 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:36.985160   38001 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:36.985188   38001 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:36.985309   38001 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:37:36.985467   38001 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:37:36.985598   38001 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:37:36.985696   38001 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:37:37.069082   38001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:37.084121   38001 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:37.084151   38001 api_server.go:166] Checking apiserver status ...
	I0717 17:37:37.084188   38001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:37.099050   38001 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0717 17:37:37.110575   38001 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:37.110626   38001 ssh_runner.go:195] Run: ls
	I0717 17:37:37.114927   38001 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:37.120501   38001 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:37.120522   38001 status.go:422] ha-174628-m03 apiserver status = Running (err=<nil>)
	I0717 17:37:37.120530   38001 status.go:257] ha-174628-m03 status: &{Name:ha-174628-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:37.120544   38001 status.go:255] checking status of ha-174628-m04 ...
	I0717 17:37:37.120842   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:37.120880   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:37.136804   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I0717 17:37:37.137203   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:37.137628   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:37.137751   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:37.138097   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:37.138290   38001 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:37:37.139849   38001 status.go:330] ha-174628-m04 host status = "Running" (err=<nil>)
	I0717 17:37:37.139865   38001 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:37.140150   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:37.140186   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:37.154289   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I0717 17:37:37.154648   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:37.155063   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:37.155084   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:37.155372   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:37.155535   38001 main.go:141] libmachine: (ha-174628-m04) Calling .GetIP
	I0717 17:37:37.158102   38001 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:37.158498   38001 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:37.158528   38001 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:37.158685   38001 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:37.158995   38001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:37.159032   38001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:37.172695   38001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44885
	I0717 17:37:37.173121   38001 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:37.173542   38001 main.go:141] libmachine: Using API Version  1
	I0717 17:37:37.173573   38001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:37.173886   38001 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:37.174043   38001 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:37:37.174213   38001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:37.174232   38001 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:37:37.177073   38001 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:37.177527   38001 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:37.177557   38001 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:37.177694   38001 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:37:37.177879   38001 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:37:37.178037   38001 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:37:37.178175   38001 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	I0717 17:37:37.263675   38001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:37.277314   38001 status.go:257] ha-174628-m04 status: &{Name:ha-174628-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
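Each healthy node in the logs above is probed with sh -c "df -h /var | awk 'NR==2{print $5}'", i.e. the Use% column of /var. A minimal sketch of that probe follows; it runs the pipeline locally via os/exec instead of over SSH, which is an assumption made only to keep the example self-contained.

// df_probe.go — illustrative local version of the /var usage probe from the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same pipeline as in the log: second line of `df -h /var`, fifth column (Use%).
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		fmt.Println("failed to get storage capacity of /var:", err)
		return
	}
	fmt.Println("/var usage:", strings.TrimSpace(string(out)))
}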
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr: exit status 3 (4.433261815s)

-- stdout --
	ha-174628
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174628-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0717 17:37:39.320443   38102 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:37:39.320560   38102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:39.320569   38102 out.go:304] Setting ErrFile to fd 2...
	I0717 17:37:39.320573   38102 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:39.320749   38102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:37:39.320930   38102 out.go:298] Setting JSON to false
	I0717 17:37:39.320979   38102 mustload.go:65] Loading cluster: ha-174628
	I0717 17:37:39.321068   38102 notify.go:220] Checking for updates...
	I0717 17:37:39.321404   38102 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:37:39.321418   38102 status.go:255] checking status of ha-174628 ...
	I0717 17:37:39.321829   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:39.321880   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:39.340572   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42453
	I0717 17:37:39.340920   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:39.341527   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:39.341555   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:39.341868   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:39.342048   38102 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:37:39.343519   38102 status.go:330] ha-174628 host status = "Running" (err=<nil>)
	I0717 17:37:39.343539   38102 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:39.343790   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:39.343828   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:39.358060   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38101
	I0717 17:37:39.358503   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:39.358978   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:39.359014   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:39.359433   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:39.359643   38102 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:37:39.362443   38102 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:39.362852   38102 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:39.362879   38102 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:39.362966   38102 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:39.363263   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:39.363294   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:39.377633   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34461
	I0717 17:37:39.377983   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:39.378440   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:39.378463   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:39.378733   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:39.378923   38102 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:37:39.379083   38102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:39.379103   38102 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:37:39.381571   38102 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:39.381953   38102 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:39.381981   38102 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:39.382100   38102 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:37:39.382269   38102 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:37:39.382402   38102 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:37:39.382687   38102 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:37:39.461996   38102 ssh_runner.go:195] Run: systemctl --version
	I0717 17:37:39.467801   38102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:39.483307   38102 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:39.483340   38102 api_server.go:166] Checking apiserver status ...
	I0717 17:37:39.483380   38102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:39.496575   38102 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup
	W0717 17:37:39.504878   38102 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:39.504919   38102 ssh_runner.go:195] Run: ls
	I0717 17:37:39.509071   38102 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:39.514660   38102 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:39.514680   38102 status.go:422] ha-174628 apiserver status = Running (err=<nil>)
	I0717 17:37:39.514692   38102 status.go:257] ha-174628 status: &{Name:ha-174628 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:39.514710   38102 status.go:255] checking status of ha-174628-m02 ...
	I0717 17:37:39.515127   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:39.515184   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:39.529671   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38893
	I0717 17:37:39.530048   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:39.530481   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:39.530498   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:39.530793   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:39.530965   38102 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:37:39.532313   38102 status.go:330] ha-174628-m02 host status = "Running" (err=<nil>)
	I0717 17:37:39.532326   38102 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:39.532591   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:39.532623   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:39.547570   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41331
	I0717 17:37:39.547940   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:39.548419   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:39.548440   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:39.548782   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:39.548959   38102 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:37:39.551954   38102 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:39.552505   38102 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:39.552534   38102 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:39.552796   38102 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:39.553163   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:39.553195   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:39.567621   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
	I0717 17:37:39.567934   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:39.568425   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:39.568454   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:39.568755   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:39.568963   38102 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:37:39.569151   38102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:39.569173   38102 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:37:39.571815   38102 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:39.572251   38102 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:39.572275   38102 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:39.572329   38102 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:37:39.572454   38102 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:37:39.572597   38102 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:37:39.572693   38102 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	W0717 17:37:40.001152   38102 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:40.001205   38102 retry.go:31] will retry after 304.388714ms: dial tcp 192.168.39.97:22: connect: no route to host
	W0717 17:37:43.365183   38102 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.97:22: connect: no route to host
	W0717 17:37:43.365288   38102 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	E0717 17:37:43.365319   38102 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:43.365330   38102 status.go:257] ha-174628-m02 status: &{Name:ha-174628-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 17:37:43.365355   38102 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:43.365377   38102 status.go:255] checking status of ha-174628-m03 ...
	I0717 17:37:43.365839   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:43.365900   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:43.380266   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37669
	I0717 17:37:43.380717   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:43.381176   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:43.381200   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:43.381565   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:43.381752   38102 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:37:43.383198   38102 status.go:330] ha-174628-m03 host status = "Running" (err=<nil>)
	I0717 17:37:43.383214   38102 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:43.383663   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:43.383714   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:43.398421   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0717 17:37:43.398788   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:43.399246   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:43.399268   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:43.399584   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:43.399809   38102 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:37:43.402203   38102 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:43.402748   38102 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:43.402781   38102 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:43.402842   38102 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:43.403116   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:43.403151   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:43.418480   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
	I0717 17:37:43.418818   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:43.419242   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:43.419267   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:43.419601   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:43.419789   38102 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:37:43.419971   38102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:43.419992   38102 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:37:43.422680   38102 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:43.423045   38102 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:43.423078   38102 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:43.423233   38102 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:37:43.423406   38102 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:37:43.423571   38102 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:37:43.423670   38102 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:37:43.505142   38102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:43.522208   38102 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:43.522234   38102 api_server.go:166] Checking apiserver status ...
	I0717 17:37:43.522270   38102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:43.535635   38102 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0717 17:37:43.545605   38102 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:43.545657   38102 ssh_runner.go:195] Run: ls
	I0717 17:37:43.549507   38102 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:43.555327   38102 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:43.555363   38102 status.go:422] ha-174628-m03 apiserver status = Running (err=<nil>)
	I0717 17:37:43.555374   38102 status.go:257] ha-174628-m03 status: &{Name:ha-174628-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:43.555395   38102 status.go:255] checking status of ha-174628-m04 ...
	I0717 17:37:43.555742   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:43.555776   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:43.571338   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33841
	I0717 17:37:43.571730   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:43.572195   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:43.572222   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:43.572530   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:43.572720   38102 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:37:43.574130   38102 status.go:330] ha-174628-m04 host status = "Running" (err=<nil>)
	I0717 17:37:43.574145   38102 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:43.574431   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:43.574469   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:43.588717   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0717 17:37:43.589129   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:43.589582   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:43.589612   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:43.589942   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:43.590119   38102 main.go:141] libmachine: (ha-174628-m04) Calling .GetIP
	I0717 17:37:43.592906   38102 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:43.593282   38102 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:43.593321   38102 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:43.593465   38102 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:43.593809   38102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:43.593862   38102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:43.608954   38102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0717 17:37:43.609341   38102 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:43.609859   38102 main.go:141] libmachine: Using API Version  1
	I0717 17:37:43.609885   38102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:43.610209   38102 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:43.610369   38102 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:37:43.610540   38102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:43.610560   38102 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:37:43.612860   38102 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:43.613273   38102 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:43.613299   38102 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:43.613403   38102 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:37:43.613534   38102 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:37:43.613660   38102 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:37:43.613771   38102 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	I0717 17:37:43.700089   38102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:43.713530   38102 status.go:257] ha-174628-m04 status: &{Name:ha-174628-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr: exit status 3 (3.718388324s)

                                                
                                                
-- stdout --
	ha-174628
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174628-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:37:48.195212   38218 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:37:48.195508   38218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:48.195519   38218 out.go:304] Setting ErrFile to fd 2...
	I0717 17:37:48.195524   38218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:48.195742   38218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:37:48.195916   38218 out.go:298] Setting JSON to false
	I0717 17:37:48.195957   38218 mustload.go:65] Loading cluster: ha-174628
	I0717 17:37:48.195999   38218 notify.go:220] Checking for updates...
	I0717 17:37:48.196348   38218 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:37:48.196365   38218 status.go:255] checking status of ha-174628 ...
	I0717 17:37:48.196738   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:48.196798   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:48.215649   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I0717 17:37:48.216041   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:48.216596   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:48.216614   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:48.216981   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:48.217178   38218 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:37:48.218736   38218 status.go:330] ha-174628 host status = "Running" (err=<nil>)
	I0717 17:37:48.218751   38218 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:48.219017   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:48.219052   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:48.233193   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0717 17:37:48.233592   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:48.234030   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:48.234049   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:48.234438   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:48.234618   38218 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:37:48.237695   38218 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:48.238097   38218 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:48.238129   38218 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:48.238249   38218 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:48.238614   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:48.238654   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:48.253293   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I0717 17:37:48.253696   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:48.254121   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:48.254149   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:48.254425   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:48.254582   38218 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:37:48.254767   38218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:48.254787   38218 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:37:48.257578   38218 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:48.258011   38218 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:48.258035   38218 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:48.258181   38218 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:37:48.258333   38218 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:37:48.258480   38218 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:37:48.258620   38218 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:37:48.336272   38218 ssh_runner.go:195] Run: systemctl --version
	I0717 17:37:48.342342   38218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:48.356763   38218 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:48.356789   38218 api_server.go:166] Checking apiserver status ...
	I0717 17:37:48.356827   38218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:48.370589   38218 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup
	W0717 17:37:48.381504   38218 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:48.381577   38218 ssh_runner.go:195] Run: ls
	I0717 17:37:48.385789   38218 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:48.390292   38218 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:48.390312   38218 status.go:422] ha-174628 apiserver status = Running (err=<nil>)
	I0717 17:37:48.390320   38218 status.go:257] ha-174628 status: &{Name:ha-174628 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:48.390335   38218 status.go:255] checking status of ha-174628-m02 ...
	I0717 17:37:48.390609   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:48.390640   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:48.405194   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0717 17:37:48.405589   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:48.406067   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:48.406089   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:48.406399   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:48.406625   38218 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:37:48.408168   38218 status.go:330] ha-174628-m02 host status = "Running" (err=<nil>)
	I0717 17:37:48.408185   38218 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:48.408484   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:48.408524   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:48.423167   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0717 17:37:48.423526   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:48.424028   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:48.424048   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:48.424404   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:48.424602   38218 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:37:48.427305   38218 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:48.427772   38218 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:48.427794   38218 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:48.427969   38218 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:48.428271   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:48.428332   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:48.442770   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35985
	I0717 17:37:48.443174   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:48.443612   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:48.443628   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:48.443934   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:48.444106   38218 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:37:48.444278   38218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:48.444296   38218 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:37:48.447203   38218 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:48.447676   38218 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:48.447713   38218 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:48.447830   38218 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:37:48.448003   38218 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:37:48.448157   38218 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:37:48.448254   38218 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	W0717 17:37:51.521225   38218 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.97:22: connect: no route to host
	W0717 17:37:51.521327   38218 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	E0717 17:37:51.521351   38218 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:51.521362   38218 status.go:257] ha-174628-m02 status: &{Name:ha-174628-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 17:37:51.521380   38218 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:51.521406   38218 status.go:255] checking status of ha-174628-m03 ...
	I0717 17:37:51.521708   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:51.521752   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:51.536310   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42049
	I0717 17:37:51.536737   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:51.537193   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:51.537215   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:51.537505   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:51.537692   38218 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:37:51.539190   38218 status.go:330] ha-174628-m03 host status = "Running" (err=<nil>)
	I0717 17:37:51.539206   38218 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:51.539478   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:51.539512   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:51.553885   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40997
	I0717 17:37:51.554323   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:51.554785   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:51.554804   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:51.555097   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:51.555280   38218 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:37:51.558072   38218 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:51.558583   38218 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:51.558616   38218 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:51.558764   38218 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:51.559057   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:51.559091   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:51.573952   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I0717 17:37:51.574307   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:51.574814   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:51.574830   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:51.575130   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:51.575323   38218 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:37:51.575508   38218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:51.575526   38218 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:37:51.578197   38218 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:51.578618   38218 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:51.578645   38218 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:51.578746   38218 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:37:51.578892   38218 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:37:51.579041   38218 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:37:51.579166   38218 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:37:51.660702   38218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:51.675765   38218 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:51.675792   38218 api_server.go:166] Checking apiserver status ...
	I0717 17:37:51.675823   38218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:51.693329   38218 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0717 17:37:51.702635   38218 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:51.702735   38218 ssh_runner.go:195] Run: ls
	I0717 17:37:51.707761   38218 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:51.713731   38218 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:51.713753   38218 status.go:422] ha-174628-m03 apiserver status = Running (err=<nil>)
	I0717 17:37:51.713763   38218 status.go:257] ha-174628-m03 status: &{Name:ha-174628-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:51.713779   38218 status.go:255] checking status of ha-174628-m04 ...
	I0717 17:37:51.714073   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:51.714119   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:51.729553   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0717 17:37:51.729992   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:51.730499   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:51.730523   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:51.730805   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:51.730952   38218 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:37:51.732613   38218 status.go:330] ha-174628-m04 host status = "Running" (err=<nil>)
	I0717 17:37:51.732627   38218 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:51.733030   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:51.733075   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:51.748161   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40681
	I0717 17:37:51.748629   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:51.749092   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:51.749114   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:51.749434   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:51.749610   38218 main.go:141] libmachine: (ha-174628-m04) Calling .GetIP
	I0717 17:37:51.752238   38218 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:51.752576   38218 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:51.752597   38218 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:51.752709   38218 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:51.753025   38218 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:51.753066   38218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:51.769101   38218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40595
	I0717 17:37:51.769591   38218 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:51.770074   38218 main.go:141] libmachine: Using API Version  1
	I0717 17:37:51.770097   38218 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:51.770510   38218 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:51.770694   38218 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:37:51.770926   38218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:51.770952   38218 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:37:51.773849   38218 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:51.774331   38218 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:51.774356   38218 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:51.774571   38218 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:37:51.774746   38218 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:37:51.774974   38218 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:37:51.775116   38218 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	I0717 17:37:51.859401   38218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:51.872349   38218 status.go:257] ha-174628-m04 status: &{Name:ha-174628-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr: exit status 3 (3.709106991s)

                                                
                                                
-- stdout --
	ha-174628
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-174628-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:37:55.880256   38342 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:37:55.880373   38342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:55.880383   38342 out.go:304] Setting ErrFile to fd 2...
	I0717 17:37:55.880387   38342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:37:55.880563   38342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:37:55.880764   38342 out.go:298] Setting JSON to false
	I0717 17:37:55.880795   38342 mustload.go:65] Loading cluster: ha-174628
	I0717 17:37:55.880830   38342 notify.go:220] Checking for updates...
	I0717 17:37:55.882272   38342 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:37:55.882328   38342 status.go:255] checking status of ha-174628 ...
	I0717 17:37:55.882757   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:55.882791   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:55.897368   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I0717 17:37:55.897832   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:55.898385   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:55.898404   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:55.898873   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:55.899076   38342 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:37:55.900682   38342 status.go:330] ha-174628 host status = "Running" (err=<nil>)
	I0717 17:37:55.900695   38342 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:55.901065   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:55.901108   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:55.915268   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43517
	I0717 17:37:55.915706   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:55.916106   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:55.916127   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:55.916437   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:55.916569   38342 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:37:55.919156   38342 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:55.919632   38342 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:55.919662   38342 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:55.919739   38342 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:37:55.920011   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:55.920044   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:55.934268   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0717 17:37:55.934649   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:55.935042   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:55.935060   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:55.935371   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:55.935552   38342 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:37:55.935741   38342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:55.935771   38342 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:37:55.938604   38342 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:55.939081   38342 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:37:55.939124   38342 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:37:55.939229   38342 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:37:55.939379   38342 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:37:55.939518   38342 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:37:55.939664   38342 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:37:56.016216   38342 ssh_runner.go:195] Run: systemctl --version
	I0717 17:37:56.021929   38342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:56.036433   38342 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:56.036458   38342 api_server.go:166] Checking apiserver status ...
	I0717 17:37:56.036498   38342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:56.050776   38342 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup
	W0717 17:37:56.061243   38342 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:56.061305   38342 ssh_runner.go:195] Run: ls
	I0717 17:37:56.064975   38342 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:56.069016   38342 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:56.069035   38342 status.go:422] ha-174628 apiserver status = Running (err=<nil>)
	I0717 17:37:56.069044   38342 status.go:257] ha-174628 status: &{Name:ha-174628 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:56.069058   38342 status.go:255] checking status of ha-174628-m02 ...
	I0717 17:37:56.069347   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:56.069389   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:56.084134   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0717 17:37:56.084578   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:56.085059   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:56.085078   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:56.085370   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:56.085551   38342 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:37:56.087122   38342 status.go:330] ha-174628-m02 host status = "Running" (err=<nil>)
	I0717 17:37:56.087138   38342 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:56.087461   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:56.087491   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:56.101597   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37579
	I0717 17:37:56.101981   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:56.102396   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:56.102410   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:56.102676   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:56.102847   38342 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:37:56.105464   38342 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:56.105877   38342 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:56.105900   38342 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:56.106066   38342 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:37:56.106374   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:56.106416   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:56.120437   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41809
	I0717 17:37:56.120796   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:56.121258   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:56.121278   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:56.121614   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:56.121846   38342 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:37:56.122018   38342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:56.122038   38342 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:37:56.124644   38342 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:56.124954   38342 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:37:56.124987   38342 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:37:56.125247   38342 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:37:56.125404   38342 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:37:56.125522   38342 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:37:56.125669   38342 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	W0717 17:37:59.201251   38342 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.97:22: connect: no route to host
	W0717 17:37:59.201425   38342 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	E0717 17:37:59.201448   38342 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:59.201463   38342 status.go:257] ha-174628-m02 status: &{Name:ha-174628-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0717 17:37:59.201487   38342 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.97:22: connect: no route to host
	I0717 17:37:59.201501   38342 status.go:255] checking status of ha-174628-m03 ...
	I0717 17:37:59.201968   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:59.202031   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:59.217116   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
	I0717 17:37:59.217557   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:59.218243   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:59.218262   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:59.218528   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:59.218739   38342 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:37:59.220506   38342 status.go:330] ha-174628-m03 host status = "Running" (err=<nil>)
	I0717 17:37:59.220519   38342 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:59.220788   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:59.220817   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:59.234872   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42053
	I0717 17:37:59.235288   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:59.235748   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:59.235766   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:59.236088   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:59.236259   38342 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:37:59.239165   38342 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:59.239698   38342 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:59.239720   38342 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:59.239896   38342 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:37:59.240337   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:59.240381   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:59.255382   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I0717 17:37:59.255828   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:59.256262   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:59.256285   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:59.256626   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:59.256838   38342 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:37:59.257041   38342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:59.257065   38342 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:37:59.260260   38342 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:59.260804   38342 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:37:59.260827   38342 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:37:59.260991   38342 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:37:59.261130   38342 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:37:59.261251   38342 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:37:59.261402   38342 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:37:59.340143   38342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:59.355005   38342 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:37:59.355034   38342 api_server.go:166] Checking apiserver status ...
	I0717 17:37:59.355080   38342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:37:59.368129   38342 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0717 17:37:59.381576   38342 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:37:59.381622   38342 ssh_runner.go:195] Run: ls
	I0717 17:37:59.386125   38342 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:37:59.390300   38342 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:37:59.390331   38342 status.go:422] ha-174628-m03 apiserver status = Running (err=<nil>)
	I0717 17:37:59.390339   38342 status.go:257] ha-174628-m03 status: &{Name:ha-174628-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:37:59.390356   38342 status.go:255] checking status of ha-174628-m04 ...
	I0717 17:37:59.390631   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:59.390660   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:59.405325   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35481
	I0717 17:37:59.405787   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:59.406242   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:59.406262   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:59.406561   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:59.406755   38342 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:37:59.408494   38342 status.go:330] ha-174628-m04 host status = "Running" (err=<nil>)
	I0717 17:37:59.408510   38342 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:59.408778   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:59.408817   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:59.423700   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40969
	I0717 17:37:59.424051   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:59.424476   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:59.424503   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:59.424790   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:59.425021   38342 main.go:141] libmachine: (ha-174628-m04) Calling .GetIP
	I0717 17:37:59.427850   38342 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:59.428289   38342 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:59.428353   38342 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:59.428571   38342 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:37:59.428848   38342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:37:59.428882   38342 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:37:59.443392   38342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I0717 17:37:59.443864   38342 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:37:59.444328   38342 main.go:141] libmachine: Using API Version  1
	I0717 17:37:59.444349   38342 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:37:59.444709   38342 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:37:59.444887   38342 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:37:59.445087   38342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:37:59.445104   38342 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:37:59.447957   38342 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:59.448327   38342 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:37:59.448365   38342 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:37:59.448482   38342 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:37:59.448626   38342 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:37:59.448792   38342 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:37:59.448974   38342 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	I0717 17:37:59.536604   38342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:37:59.549822   38342 status.go:257] ha-174628-m04 status: &{Name:ha-174628-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
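In the run above, the first status invocation (the 38342 lines) cannot reach ha-174628-m02 over SSH ("dial tcp 192.168.39.97:22: connect: no route to host"), so it reports that node as Host:Error with Kubelet/APIServer Nonexistent; the later invocations below see the domain as powered off and report Host:Stopped instead. A quick way to confirm from the host that the node is unreachable is to retry the same SSH target shown in the log (IP, user and key path are taken from the sshutil line above; the 5-second timeout is an arbitrary choice, not something the test uses):

	ssh -o ConnectTimeout=5 \
	    -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa \
	    docker@192.168.39.97 true \
	  || echo "ha-174628-m02 unreachable over SSH (exit $?)"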
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr: exit status 7 (611.502647ms)

                                                
                                                
-- stdout --
	ha-174628
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-174628-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:38:05.995579   38479 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:38:05.995693   38479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:38:05.995705   38479 out.go:304] Setting ErrFile to fd 2...
	I0717 17:38:05.995711   38479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:38:05.995920   38479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:38:05.996121   38479 out.go:298] Setting JSON to false
	I0717 17:38:05.996150   38479 mustload.go:65] Loading cluster: ha-174628
	I0717 17:38:05.996253   38479 notify.go:220] Checking for updates...
	I0717 17:38:05.996666   38479 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:38:05.996682   38479 status.go:255] checking status of ha-174628 ...
	I0717 17:38:05.997234   38479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:05.997274   38479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:06.012317   38479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41275
	I0717 17:38:06.012705   38479 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:06.013238   38479 main.go:141] libmachine: Using API Version  1
	I0717 17:38:06.013260   38479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:06.013682   38479 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:06.013910   38479 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:38:06.015625   38479 status.go:330] ha-174628 host status = "Running" (err=<nil>)
	I0717 17:38:06.015640   38479 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:38:06.016035   38479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:06.016088   38479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:06.030670   38479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34639
	I0717 17:38:06.031027   38479 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:06.031453   38479 main.go:141] libmachine: Using API Version  1
	I0717 17:38:06.031471   38479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:06.031833   38479 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:06.032000   38479 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:38:06.034785   38479 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:38:06.035294   38479 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:38:06.035321   38479 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:38:06.035471   38479 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:38:06.035830   38479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:06.035882   38479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:06.050895   38479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41451
	I0717 17:38:06.051332   38479 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:06.051787   38479 main.go:141] libmachine: Using API Version  1
	I0717 17:38:06.051809   38479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:06.052084   38479 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:06.052266   38479 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:38:06.052484   38479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:38:06.052521   38479 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:38:06.055181   38479 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:38:06.055673   38479 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:38:06.055709   38479 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:38:06.055825   38479 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:38:06.056010   38479 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:38:06.056149   38479 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:38:06.056282   38479 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:38:06.141866   38479 ssh_runner.go:195] Run: systemctl --version
	I0717 17:38:06.147623   38479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:38:06.164115   38479 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:38:06.164138   38479 api_server.go:166] Checking apiserver status ...
	I0717 17:38:06.164168   38479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:38:06.178020   38479 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup
	W0717 17:38:06.187138   38479 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:38:06.187181   38479 ssh_runner.go:195] Run: ls
	I0717 17:38:06.191177   38479 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:38:06.195419   38479 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:38:06.195437   38479 status.go:422] ha-174628 apiserver status = Running (err=<nil>)
	I0717 17:38:06.195445   38479 status.go:257] ha-174628 status: &{Name:ha-174628 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:38:06.195470   38479 status.go:255] checking status of ha-174628-m02 ...
	I0717 17:38:06.195740   38479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:06.195770   38479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:06.210940   38479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40353
	I0717 17:38:06.211298   38479 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:06.211741   38479 main.go:141] libmachine: Using API Version  1
	I0717 17:38:06.211763   38479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:06.212069   38479 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:06.212263   38479 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:38:06.213818   38479 status.go:330] ha-174628-m02 host status = "Stopped" (err=<nil>)
	I0717 17:38:06.213832   38479 status.go:343] host is not running, skipping remaining checks
	I0717 17:38:06.213838   38479 status.go:257] ha-174628-m02 status: &{Name:ha-174628-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:38:06.213853   38479 status.go:255] checking status of ha-174628-m03 ...
	I0717 17:38:06.214128   38479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:06.214163   38479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:06.229128   38479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35845
	I0717 17:38:06.229563   38479 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:06.230071   38479 main.go:141] libmachine: Using API Version  1
	I0717 17:38:06.230099   38479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:06.230375   38479 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:06.230598   38479 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:38:06.232132   38479 status.go:330] ha-174628-m03 host status = "Running" (err=<nil>)
	I0717 17:38:06.232144   38479 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:38:06.232438   38479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:06.232465   38479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:06.247475   38479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40857
	I0717 17:38:06.247931   38479 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:06.248455   38479 main.go:141] libmachine: Using API Version  1
	I0717 17:38:06.248476   38479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:06.248805   38479 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:06.249004   38479 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:38:06.251695   38479 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:38:06.252146   38479 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:38:06.252173   38479 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:38:06.252333   38479 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:38:06.252629   38479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:06.252657   38479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:06.268300   38479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35523
	I0717 17:38:06.268711   38479 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:06.269199   38479 main.go:141] libmachine: Using API Version  1
	I0717 17:38:06.269224   38479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:06.269583   38479 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:06.269776   38479 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:38:06.269989   38479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:38:06.270009   38479 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:38:06.272551   38479 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:38:06.273078   38479 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:38:06.273201   38479 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:38:06.273401   38479 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:38:06.273585   38479 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:38:06.273745   38479 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:38:06.273909   38479 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:38:06.356400   38479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:38:06.370772   38479 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:38:06.370795   38479 api_server.go:166] Checking apiserver status ...
	I0717 17:38:06.370831   38479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:38:06.387215   38479 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0717 17:38:06.396532   38479 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:38:06.396582   38479 ssh_runner.go:195] Run: ls
	I0717 17:38:06.400578   38479 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:38:06.404978   38479 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:38:06.404996   38479 status.go:422] ha-174628-m03 apiserver status = Running (err=<nil>)
	I0717 17:38:06.405004   38479 status.go:257] ha-174628-m03 status: &{Name:ha-174628-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:38:06.405016   38479 status.go:255] checking status of ha-174628-m04 ...
	I0717 17:38:06.405270   38479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:06.405300   38479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:06.422530   38479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38767
	I0717 17:38:06.422950   38479 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:06.423438   38479 main.go:141] libmachine: Using API Version  1
	I0717 17:38:06.423457   38479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:06.423775   38479 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:06.423977   38479 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:38:06.425449   38479 status.go:330] ha-174628-m04 host status = "Running" (err=<nil>)
	I0717 17:38:06.425463   38479 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:38:06.425742   38479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:06.425770   38479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:06.439934   38479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0717 17:38:06.440338   38479 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:06.440745   38479 main.go:141] libmachine: Using API Version  1
	I0717 17:38:06.440765   38479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:06.441075   38479 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:06.441235   38479 main.go:141] libmachine: (ha-174628-m04) Calling .GetIP
	I0717 17:38:06.443860   38479 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:38:06.444277   38479 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:38:06.444304   38479 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:38:06.444427   38479 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:38:06.444714   38479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:06.444743   38479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:06.459599   38479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36241
	I0717 17:38:06.460019   38479 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:06.460484   38479 main.go:141] libmachine: Using API Version  1
	I0717 17:38:06.460507   38479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:06.460862   38479 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:06.461094   38479 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:38:06.461332   38479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:38:06.461357   38479 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:38:06.463927   38479 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:38:06.464331   38479 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:38:06.464372   38479 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:38:06.464529   38479 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:38:06.464667   38479 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:38:06.464850   38479 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:38:06.464999   38479 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	I0717 17:38:06.548180   38479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:38:06.562470   38479 status.go:257] ha-174628-m04 status: &{Name:ha-174628-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr: exit status 7 (601.014028ms)

                                                
                                                
-- stdout --
	ha-174628
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-174628-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:38:16.511533   38584 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:38:16.511623   38584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:38:16.511630   38584 out.go:304] Setting ErrFile to fd 2...
	I0717 17:38:16.511635   38584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:38:16.511823   38584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:38:16.511980   38584 out.go:298] Setting JSON to false
	I0717 17:38:16.512006   38584 mustload.go:65] Loading cluster: ha-174628
	I0717 17:38:16.512111   38584 notify.go:220] Checking for updates...
	I0717 17:38:16.512331   38584 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:38:16.512343   38584 status.go:255] checking status of ha-174628 ...
	I0717 17:38:16.512704   38584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:16.512751   38584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:16.531664   38584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39895
	I0717 17:38:16.532139   38584 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:16.532815   38584 main.go:141] libmachine: Using API Version  1
	I0717 17:38:16.532836   38584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:16.533272   38584 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:16.533470   38584 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:38:16.534972   38584 status.go:330] ha-174628 host status = "Running" (err=<nil>)
	I0717 17:38:16.534987   38584 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:38:16.535297   38584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:16.535348   38584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:16.549529   38584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I0717 17:38:16.549973   38584 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:16.550462   38584 main.go:141] libmachine: Using API Version  1
	I0717 17:38:16.550484   38584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:16.550780   38584 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:16.550961   38584 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:38:16.553929   38584 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:38:16.554399   38584 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:38:16.554429   38584 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:38:16.554590   38584 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:38:16.554872   38584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:16.554930   38584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:16.569110   38584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35411
	I0717 17:38:16.569518   38584 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:16.569979   38584 main.go:141] libmachine: Using API Version  1
	I0717 17:38:16.570002   38584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:16.570305   38584 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:16.570501   38584 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:38:16.570713   38584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:38:16.570744   38584 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:38:16.573525   38584 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:38:16.573971   38584 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:38:16.573987   38584 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:38:16.574113   38584 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:38:16.574314   38584 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:38:16.574473   38584 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:38:16.574715   38584 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:38:16.653501   38584 ssh_runner.go:195] Run: systemctl --version
	I0717 17:38:16.659643   38584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:38:16.674637   38584 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:38:16.674661   38584 api_server.go:166] Checking apiserver status ...
	I0717 17:38:16.674688   38584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:38:16.687320   38584 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup
	W0717 17:38:16.696926   38584 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1120/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:38:16.697008   38584 ssh_runner.go:195] Run: ls
	I0717 17:38:16.700737   38584 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:38:16.706432   38584 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:38:16.706458   38584 status.go:422] ha-174628 apiserver status = Running (err=<nil>)
	I0717 17:38:16.706470   38584 status.go:257] ha-174628 status: &{Name:ha-174628 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:38:16.706491   38584 status.go:255] checking status of ha-174628-m02 ...
	I0717 17:38:16.706850   38584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:16.706883   38584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:16.721570   38584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33119
	I0717 17:38:16.722117   38584 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:16.722640   38584 main.go:141] libmachine: Using API Version  1
	I0717 17:38:16.722666   38584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:16.723031   38584 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:16.723202   38584 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:38:16.724786   38584 status.go:330] ha-174628-m02 host status = "Stopped" (err=<nil>)
	I0717 17:38:16.724797   38584 status.go:343] host is not running, skipping remaining checks
	I0717 17:38:16.724803   38584 status.go:257] ha-174628-m02 status: &{Name:ha-174628-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:38:16.724817   38584 status.go:255] checking status of ha-174628-m03 ...
	I0717 17:38:16.725198   38584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:16.725235   38584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:16.739818   38584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39511
	I0717 17:38:16.740165   38584 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:16.740651   38584 main.go:141] libmachine: Using API Version  1
	I0717 17:38:16.740677   38584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:16.741019   38584 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:16.741208   38584 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:38:16.742820   38584 status.go:330] ha-174628-m03 host status = "Running" (err=<nil>)
	I0717 17:38:16.742836   38584 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:38:16.743161   38584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:16.743197   38584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:16.757655   38584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45271
	I0717 17:38:16.758023   38584 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:16.758460   38584 main.go:141] libmachine: Using API Version  1
	I0717 17:38:16.758481   38584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:16.758809   38584 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:16.758993   38584 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:38:16.761747   38584 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:38:16.762097   38584 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:38:16.762141   38584 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:38:16.762290   38584 host.go:66] Checking if "ha-174628-m03" exists ...
	I0717 17:38:16.762607   38584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:16.762643   38584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:16.776868   38584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43993
	I0717 17:38:16.777224   38584 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:16.777647   38584 main.go:141] libmachine: Using API Version  1
	I0717 17:38:16.777669   38584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:16.777940   38584 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:16.778124   38584 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:38:16.778289   38584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:38:16.778372   38584 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:38:16.781659   38584 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:38:16.782091   38584 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:38:16.782128   38584 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:38:16.782280   38584 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:38:16.782434   38584 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:38:16.782610   38584 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:38:16.782793   38584 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:38:16.860029   38584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:38:16.880682   38584 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:38:16.880708   38584 api_server.go:166] Checking apiserver status ...
	I0717 17:38:16.880742   38584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:38:16.893941   38584 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0717 17:38:16.902037   38584 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:38:16.902080   38584 ssh_runner.go:195] Run: ls
	I0717 17:38:16.906021   38584 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:38:16.912163   38584 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:38:16.912182   38584 status.go:422] ha-174628-m03 apiserver status = Running (err=<nil>)
	I0717 17:38:16.912192   38584 status.go:257] ha-174628-m03 status: &{Name:ha-174628-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:38:16.912211   38584 status.go:255] checking status of ha-174628-m04 ...
	I0717 17:38:16.912555   38584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:16.912588   38584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:16.927138   38584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33739
	I0717 17:38:16.927495   38584 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:16.927894   38584 main.go:141] libmachine: Using API Version  1
	I0717 17:38:16.927912   38584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:16.928171   38584 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:16.928402   38584 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:38:16.930359   38584 status.go:330] ha-174628-m04 host status = "Running" (err=<nil>)
	I0717 17:38:16.930375   38584 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:38:16.930764   38584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:16.930812   38584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:16.945724   38584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I0717 17:38:16.946113   38584 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:16.946551   38584 main.go:141] libmachine: Using API Version  1
	I0717 17:38:16.946573   38584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:16.947045   38584 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:16.947239   38584 main.go:141] libmachine: (ha-174628-m04) Calling .GetIP
	I0717 17:38:16.950065   38584 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:38:16.950496   38584 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:38:16.950516   38584 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:38:16.950637   38584 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:38:16.950973   38584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:16.951018   38584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:16.965282   38584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42151
	I0717 17:38:16.965687   38584 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:16.966151   38584 main.go:141] libmachine: Using API Version  1
	I0717 17:38:16.966170   38584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:16.966526   38584 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:16.966716   38584 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:38:16.966876   38584 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:38:16.966892   38584 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:38:16.969593   38584 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:38:16.970008   38584 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:38:16.970029   38584 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:38:16.970222   38584 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:38:16.970424   38584 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:38:16.970608   38584 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:38:16.970765   38584 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	I0717 17:38:17.055911   38584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:38:17.070845   38584 status.go:257] ha-174628-m04 status: &{Name:ha-174628-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr" : exit status 7
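The exit status 7 above comes from the stopped secondary control plane: every status pass in this window finds ha-174628-m02 with host, kubelet and apiserver stopped (or unreachable), so the status command keeps exiting non-zero and the assertion at ha_test.go:432 fails. A minimal reproduction outside the test harness, assuming the same binary path and profile name as in the log, is:

	out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
	echo "status exit code: $?"    # 7 in this run, while ha-174628-m02 is still down

Once ha-174628-m02 reports Running/Configured again, the same command should exit 0 and this check passes.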
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174628 -n ha-174628
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174628 logs -n 25: (1.282881197s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628:/home/docker/cp-test_ha-174628-m03_ha-174628.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628 sudo cat                                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m03_ha-174628.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m02:/home/docker/cp-test_ha-174628-m03_ha-174628-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m02 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m03_ha-174628-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04:/home/docker/cp-test_ha-174628-m03_ha-174628-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m04 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m03_ha-174628-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp testdata/cp-test.txt                                                | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3227756898/001/cp-test_ha-174628-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628:/home/docker/cp-test_ha-174628-m04_ha-174628.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628 sudo cat                                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m02:/home/docker/cp-test_ha-174628-m04_ha-174628-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m02 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03:/home/docker/cp-test_ha-174628-m04_ha-174628-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m03 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-174628 node stop m02 -v=7                                                     | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-174628 node start m02 -v=7                                                    | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 17:29:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 17:29:16.325220   32725 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:29:16.325468   32725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:29:16.325475   32725 out.go:304] Setting ErrFile to fd 2...
	I0717 17:29:16.325479   32725 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:29:16.325665   32725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:29:16.326208   32725 out.go:298] Setting JSON to false
	I0717 17:29:16.327076   32725 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4299,"bootTime":1721233057,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 17:29:16.327136   32725 start.go:139] virtualization: kvm guest
	I0717 17:29:16.329100   32725 out.go:177] * [ha-174628] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 17:29:16.330382   32725 notify.go:220] Checking for updates...
	I0717 17:29:16.330414   32725 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 17:29:16.331726   32725 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 17:29:16.333057   32725 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:29:16.334184   32725 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:29:16.335435   32725 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 17:29:16.336607   32725 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 17:29:16.338066   32725 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 17:29:16.373367   32725 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 17:29:16.374791   32725 start.go:297] selected driver: kvm2
	I0717 17:29:16.374813   32725 start.go:901] validating driver "kvm2" against <nil>
	I0717 17:29:16.374825   32725 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 17:29:16.375499   32725 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:29:16.375578   32725 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 17:29:16.390884   32725 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 17:29:16.390942   32725 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 17:29:16.391158   32725 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:29:16.391218   32725 cni.go:84] Creating CNI manager for ""
	I0717 17:29:16.391229   32725 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0717 17:29:16.391234   32725 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 17:29:16.391297   32725 start.go:340] cluster config:
	{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:29:16.391379   32725 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:29:16.393078   32725 out.go:177] * Starting "ha-174628" primary control-plane node in "ha-174628" cluster
	I0717 17:29:16.394342   32725 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:29:16.394375   32725 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 17:29:16.394410   32725 cache.go:56] Caching tarball of preloaded images
	I0717 17:29:16.394484   32725 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 17:29:16.394493   32725 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
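
The preload step above only checks whether the expected tarball is already present under the cache directory before deciding to skip the download. A minimal sketch of that existence check, with the path layout copied from the log (the helper name is illustrative):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath builds the cache location shown in the log for a given
	// Kubernetes version and container runtime.
	func preloadPath(minikubeHome, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath("/home/jenkins/minikube-integration/19283-14386/.minikube", "v1.30.2", "cri-o")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("preload missing, would download:", p)
		}
	}
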
	I0717 17:29:16.394776   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:29:16.394795   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json: {Name:mk775845471b87c734d3c09d31cd9902fcebfad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:16.394910   32725 start.go:360] acquireMachinesLock for ha-174628: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 17:29:16.394935   32725 start.go:364] duration metric: took 14.63µs to acquireMachinesLock for "ha-174628"
	I0717 17:29:16.394952   32725 start.go:93] Provisioning new machine with config: &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:29:16.395005   32725 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 17:29:16.396649   32725 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 17:29:16.396775   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:29:16.396806   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:29:16.410681   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40869
	I0717 17:29:16.411151   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:29:16.411676   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:29:16.411698   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:29:16.412056   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:29:16.412243   32725 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:29:16.412423   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:16.412557   32725 start.go:159] libmachine.API.Create for "ha-174628" (driver="kvm2")
	I0717 17:29:16.412586   32725 client.go:168] LocalClient.Create starting
	I0717 17:29:16.412634   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 17:29:16.412669   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:29:16.412692   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:29:16.412752   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 17:29:16.412777   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:29:16.412794   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:29:16.412821   32725 main.go:141] libmachine: Running pre-create checks...
	I0717 17:29:16.412846   32725 main.go:141] libmachine: (ha-174628) Calling .PreCreateCheck
	I0717 17:29:16.413189   32725 main.go:141] libmachine: (ha-174628) Calling .GetConfigRaw
	I0717 17:29:16.413569   32725 main.go:141] libmachine: Creating machine...
	I0717 17:29:16.413583   32725 main.go:141] libmachine: (ha-174628) Calling .Create
	I0717 17:29:16.413753   32725 main.go:141] libmachine: (ha-174628) Creating KVM machine...
	I0717 17:29:16.415006   32725 main.go:141] libmachine: (ha-174628) DBG | found existing default KVM network
	I0717 17:29:16.415670   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:16.415530   32748 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0717 17:29:16.415694   32725 main.go:141] libmachine: (ha-174628) DBG | created network xml: 
	I0717 17:29:16.415712   32725 main.go:141] libmachine: (ha-174628) DBG | <network>
	I0717 17:29:16.415727   32725 main.go:141] libmachine: (ha-174628) DBG |   <name>mk-ha-174628</name>
	I0717 17:29:16.415739   32725 main.go:141] libmachine: (ha-174628) DBG |   <dns enable='no'/>
	I0717 17:29:16.415749   32725 main.go:141] libmachine: (ha-174628) DBG |   
	I0717 17:29:16.415760   32725 main.go:141] libmachine: (ha-174628) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 17:29:16.415769   32725 main.go:141] libmachine: (ha-174628) DBG |     <dhcp>
	I0717 17:29:16.415782   32725 main.go:141] libmachine: (ha-174628) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 17:29:16.415792   32725 main.go:141] libmachine: (ha-174628) DBG |     </dhcp>
	I0717 17:29:16.415818   32725 main.go:141] libmachine: (ha-174628) DBG |   </ip>
	I0717 17:29:16.415836   32725 main.go:141] libmachine: (ha-174628) DBG |   
	I0717 17:29:16.415846   32725 main.go:141] libmachine: (ha-174628) DBG | </network>
	I0717 17:29:16.415851   32725 main.go:141] libmachine: (ha-174628) DBG | 
	I0717 17:29:16.420571   32725 main.go:141] libmachine: (ha-174628) DBG | trying to create private KVM network mk-ha-174628 192.168.39.0/24...
	I0717 17:29:16.483371   32725 main.go:141] libmachine: (ha-174628) DBG | private KVM network mk-ha-174628 192.168.39.0/24 created
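
The kvm2 driver creates this network through the libvirt API. For debugging, the same definition can be applied by hand with virsh; a rough sketch, assuming the <network> XML printed above has been saved to a file (the file name mk-ha-174628.xml is hypothetical):

	package main

	import (
		"log"
		"os/exec"
	)

	// virsh runs a single virsh subcommand and prints its output.
	func virsh(args ...string) {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
		log.Printf("virsh %v:\n%s", args, out)
	}

	func main() {
		// mk-ha-174628.xml is assumed to hold the <network> definition shown above.
		virsh("net-define", "mk-ha-174628.xml")
		virsh("net-start", "mk-ha-174628")
		// Confirm the bridge and DHCP range match what the driver logged.
		virsh("net-dumpxml", "mk-ha-174628")
	}
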
	I0717 17:29:16.483396   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:16.483325   32748 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:29:16.483407   32725 main.go:141] libmachine: (ha-174628) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628 ...
	I0717 17:29:16.483423   32725 main.go:141] libmachine: (ha-174628) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 17:29:16.483520   32725 main.go:141] libmachine: (ha-174628) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 17:29:16.710849   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:16.710721   32748 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa...
	I0717 17:29:16.898456   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:16.898351   32748 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/ha-174628.rawdisk...
	I0717 17:29:16.898499   32725 main.go:141] libmachine: (ha-174628) DBG | Writing magic tar header
	I0717 17:29:16.898516   32725 main.go:141] libmachine: (ha-174628) DBG | Writing SSH key tar header
	I0717 17:29:16.898529   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:16.898460   32748 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628 ...
	I0717 17:29:16.898595   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628
	I0717 17:29:16.898615   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 17:29:16.898623   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628 (perms=drwx------)
	I0717 17:29:16.898630   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:29:16.898636   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 17:29:16.898768   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 17:29:16.898786   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 17:29:16.898802   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 17:29:16.898815   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 17:29:16.898831   32725 main.go:141] libmachine: (ha-174628) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 17:29:16.898841   32725 main.go:141] libmachine: (ha-174628) Creating domain...
	I0717 17:29:16.898884   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 17:29:16.898904   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home/jenkins
	I0717 17:29:16.898915   32725 main.go:141] libmachine: (ha-174628) DBG | Checking permissions on dir: /home
	I0717 17:29:16.898924   32725 main.go:141] libmachine: (ha-174628) DBG | Skipping /home - not owner
	I0717 17:29:16.899920   32725 main.go:141] libmachine: (ha-174628) define libvirt domain using xml: 
	I0717 17:29:16.899942   32725 main.go:141] libmachine: (ha-174628) <domain type='kvm'>
	I0717 17:29:16.899953   32725 main.go:141] libmachine: (ha-174628)   <name>ha-174628</name>
	I0717 17:29:16.899964   32725 main.go:141] libmachine: (ha-174628)   <memory unit='MiB'>2200</memory>
	I0717 17:29:16.899976   32725 main.go:141] libmachine: (ha-174628)   <vcpu>2</vcpu>
	I0717 17:29:16.899984   32725 main.go:141] libmachine: (ha-174628)   <features>
	I0717 17:29:16.899994   32725 main.go:141] libmachine: (ha-174628)     <acpi/>
	I0717 17:29:16.900004   32725 main.go:141] libmachine: (ha-174628)     <apic/>
	I0717 17:29:16.900011   32725 main.go:141] libmachine: (ha-174628)     <pae/>
	I0717 17:29:16.900026   32725 main.go:141] libmachine: (ha-174628)     
	I0717 17:29:16.900049   32725 main.go:141] libmachine: (ha-174628)   </features>
	I0717 17:29:16.900067   32725 main.go:141] libmachine: (ha-174628)   <cpu mode='host-passthrough'>
	I0717 17:29:16.900091   32725 main.go:141] libmachine: (ha-174628)   
	I0717 17:29:16.900110   32725 main.go:141] libmachine: (ha-174628)   </cpu>
	I0717 17:29:16.900124   32725 main.go:141] libmachine: (ha-174628)   <os>
	I0717 17:29:16.900141   32725 main.go:141] libmachine: (ha-174628)     <type>hvm</type>
	I0717 17:29:16.900152   32725 main.go:141] libmachine: (ha-174628)     <boot dev='cdrom'/>
	I0717 17:29:16.900162   32725 main.go:141] libmachine: (ha-174628)     <boot dev='hd'/>
	I0717 17:29:16.900170   32725 main.go:141] libmachine: (ha-174628)     <bootmenu enable='no'/>
	I0717 17:29:16.900180   32725 main.go:141] libmachine: (ha-174628)   </os>
	I0717 17:29:16.900188   32725 main.go:141] libmachine: (ha-174628)   <devices>
	I0717 17:29:16.900199   32725 main.go:141] libmachine: (ha-174628)     <disk type='file' device='cdrom'>
	I0717 17:29:16.900215   32725 main.go:141] libmachine: (ha-174628)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/boot2docker.iso'/>
	I0717 17:29:16.900226   32725 main.go:141] libmachine: (ha-174628)       <target dev='hdc' bus='scsi'/>
	I0717 17:29:16.900237   32725 main.go:141] libmachine: (ha-174628)       <readonly/>
	I0717 17:29:16.900246   32725 main.go:141] libmachine: (ha-174628)     </disk>
	I0717 17:29:16.900255   32725 main.go:141] libmachine: (ha-174628)     <disk type='file' device='disk'>
	I0717 17:29:16.900266   32725 main.go:141] libmachine: (ha-174628)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 17:29:16.900282   32725 main.go:141] libmachine: (ha-174628)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/ha-174628.rawdisk'/>
	I0717 17:29:16.900291   32725 main.go:141] libmachine: (ha-174628)       <target dev='hda' bus='virtio'/>
	I0717 17:29:16.900301   32725 main.go:141] libmachine: (ha-174628)     </disk>
	I0717 17:29:16.900312   32725 main.go:141] libmachine: (ha-174628)     <interface type='network'>
	I0717 17:29:16.900348   32725 main.go:141] libmachine: (ha-174628)       <source network='mk-ha-174628'/>
	I0717 17:29:16.900367   32725 main.go:141] libmachine: (ha-174628)       <model type='virtio'/>
	I0717 17:29:16.900387   32725 main.go:141] libmachine: (ha-174628)     </interface>
	I0717 17:29:16.900399   32725 main.go:141] libmachine: (ha-174628)     <interface type='network'>
	I0717 17:29:16.900405   32725 main.go:141] libmachine: (ha-174628)       <source network='default'/>
	I0717 17:29:16.900411   32725 main.go:141] libmachine: (ha-174628)       <model type='virtio'/>
	I0717 17:29:16.900417   32725 main.go:141] libmachine: (ha-174628)     </interface>
	I0717 17:29:16.900423   32725 main.go:141] libmachine: (ha-174628)     <serial type='pty'>
	I0717 17:29:16.900429   32725 main.go:141] libmachine: (ha-174628)       <target port='0'/>
	I0717 17:29:16.900435   32725 main.go:141] libmachine: (ha-174628)     </serial>
	I0717 17:29:16.900440   32725 main.go:141] libmachine: (ha-174628)     <console type='pty'>
	I0717 17:29:16.900447   32725 main.go:141] libmachine: (ha-174628)       <target type='serial' port='0'/>
	I0717 17:29:16.900452   32725 main.go:141] libmachine: (ha-174628)     </console>
	I0717 17:29:16.900457   32725 main.go:141] libmachine: (ha-174628)     <rng model='virtio'>
	I0717 17:29:16.900463   32725 main.go:141] libmachine: (ha-174628)       <backend model='random'>/dev/random</backend>
	I0717 17:29:16.900467   32725 main.go:141] libmachine: (ha-174628)     </rng>
	I0717 17:29:16.900472   32725 main.go:141] libmachine: (ha-174628)     
	I0717 17:29:16.900478   32725 main.go:141] libmachine: (ha-174628)     
	I0717 17:29:16.900494   32725 main.go:141] libmachine: (ha-174628)   </devices>
	I0717 17:29:16.900515   32725 main.go:141] libmachine: (ha-174628) </domain>
	I0717 17:29:16.900527   32725 main.go:141] libmachine: (ha-174628) 
	I0717 17:29:16.904662   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:d8:65:e3 in network default
	I0717 17:29:16.905248   32725 main.go:141] libmachine: (ha-174628) Ensuring networks are active...
	I0717 17:29:16.905287   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:16.905890   32725 main.go:141] libmachine: (ha-174628) Ensuring network default is active
	I0717 17:29:16.906159   32725 main.go:141] libmachine: (ha-174628) Ensuring network mk-ha-174628 is active
	I0717 17:29:16.906607   32725 main.go:141] libmachine: (ha-174628) Getting domain xml...
	I0717 17:29:16.907349   32725 main.go:141] libmachine: (ha-174628) Creating domain...
	I0717 17:29:18.083624   32725 main.go:141] libmachine: (ha-174628) Waiting to get IP...
	I0717 17:29:18.084593   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:18.085066   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:18.085089   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:18.085028   32748 retry.go:31] will retry after 198.059319ms: waiting for machine to come up
	I0717 17:29:18.284591   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:18.285099   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:18.285136   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:18.285044   32748 retry.go:31] will retry after 315.863924ms: waiting for machine to come up
	I0717 17:29:18.602704   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:18.603281   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:18.603312   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:18.603233   32748 retry.go:31] will retry after 365.595994ms: waiting for machine to come up
	I0717 17:29:18.970866   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:18.971206   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:18.971232   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:18.971160   32748 retry.go:31] will retry after 446.072916ms: waiting for machine to come up
	I0717 17:29:19.418679   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:19.419148   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:19.419178   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:19.419082   32748 retry.go:31] will retry after 612.766182ms: waiting for machine to come up
	I0717 17:29:20.034068   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:20.034510   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:20.034538   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:20.034463   32748 retry.go:31] will retry after 865.493851ms: waiting for machine to come up
	I0717 17:29:20.901494   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:20.901946   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:20.901983   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:20.901912   32748 retry.go:31] will retry after 784.975912ms: waiting for machine to come up
	I0717 17:29:21.688270   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:21.688649   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:21.688677   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:21.688600   32748 retry.go:31] will retry after 1.259680032s: waiting for machine to come up
	I0717 17:29:22.949945   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:22.950369   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:22.950393   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:22.950302   32748 retry.go:31] will retry after 1.397281939s: waiting for machine to come up
	I0717 17:29:24.348792   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:24.349222   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:24.349243   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:24.349144   32748 retry.go:31] will retry after 1.757971792s: waiting for machine to come up
	I0717 17:29:26.109282   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:26.109745   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:26.109783   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:26.109714   32748 retry.go:31] will retry after 1.976185642s: waiting for machine to come up
	I0717 17:29:28.087845   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:28.088250   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:28.088269   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:28.088214   32748 retry.go:31] will retry after 3.419200588s: waiting for machine to come up
	I0717 17:29:31.509234   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:31.509640   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find current IP address of domain ha-174628 in network mk-ha-174628
	I0717 17:29:31.509661   32725 main.go:141] libmachine: (ha-174628) DBG | I0717 17:29:31.509602   32748 retry.go:31] will retry after 3.616430336s: waiting for machine to come up
	I0717 17:29:35.130399   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.130939   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has current primary IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.130955   32725 main.go:141] libmachine: (ha-174628) Found IP for machine: 192.168.39.100
	I0717 17:29:35.130967   32725 main.go:141] libmachine: (ha-174628) Reserving static IP address...
	I0717 17:29:35.131422   32725 main.go:141] libmachine: (ha-174628) DBG | unable to find host DHCP lease matching {name: "ha-174628", mac: "52:54:00:2f:44:49", ip: "192.168.39.100"} in network mk-ha-174628
	I0717 17:29:35.202327   32725 main.go:141] libmachine: (ha-174628) DBG | Getting to WaitForSSH function...
	I0717 17:29:35.202406   32725 main.go:141] libmachine: (ha-174628) Reserved static IP address: 192.168.39.100
	I0717 17:29:35.202422   32725 main.go:141] libmachine: (ha-174628) Waiting for SSH to be available...
	I0717 17:29:35.204817   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.205248   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.205276   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.205479   32725 main.go:141] libmachine: (ha-174628) DBG | Using SSH client type: external
	I0717 17:29:35.205509   32725 main.go:141] libmachine: (ha-174628) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa (-rw-------)
	I0717 17:29:35.205555   32725 main.go:141] libmachine: (ha-174628) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 17:29:35.205571   32725 main.go:141] libmachine: (ha-174628) DBG | About to run SSH command:
	I0717 17:29:35.205589   32725 main.go:141] libmachine: (ha-174628) DBG | exit 0
	I0717 17:29:35.325324   32725 main.go:141] libmachine: (ha-174628) DBG | SSH cmd err, output: <nil>: 
	I0717 17:29:35.325546   32725 main.go:141] libmachine: (ha-174628) KVM machine creation complete!
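
The "waiting for machine to come up" loop above retries with increasing delays until the domain's MAC address shows up in the network's DHCP leases. A rough equivalent of that wait, polling virsh net-dhcp-leases with a capped backoff (the delays and helper name are illustrative, not minikube's retry.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForLease polls the libvirt network's DHCP leases until the given MAC
	// address appears, sleeping a little longer after each failed attempt.
	func waitForLease(network, mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			out, err := exec.Command("virsh", "net-dhcp-leases", network).CombinedOutput()
			if err == nil {
				for _, line := range strings.Split(string(out), "\n") {
					if strings.Contains(line, mac) {
						return line, nil // the lease line contains the assigned IP
					}
				}
			}
			time.Sleep(delay)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("no DHCP lease for %s in network %s after %s", mac, network, timeout)
	}

	func main() {
		lease, err := waitForLease("mk-ha-174628", "52:54:00:2f:44:49", 3*time.Minute)
		fmt.Println(lease, err)
	}
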
	I0717 17:29:35.325833   32725 main.go:141] libmachine: (ha-174628) Calling .GetConfigRaw
	I0717 17:29:35.326468   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:35.326701   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:35.326860   32725 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 17:29:35.326874   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:29:35.328025   32725 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 17:29:35.328041   32725 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 17:29:35.328049   32725 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 17:29:35.328058   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.329977   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.330280   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.330297   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.330428   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:35.330596   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.330732   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.330846   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:35.331005   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:35.331233   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:35.331248   32725 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 17:29:35.427908   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:29:35.427932   32725 main.go:141] libmachine: Detecting the provisioner...
	I0717 17:29:35.427940   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.430644   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.430977   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.431014   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.431078   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:35.431297   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.431462   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.431630   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:35.431782   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:35.431950   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:35.431960   32725 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 17:29:35.529089   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 17:29:35.529169   32725 main.go:141] libmachine: found compatible host: buildroot
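
Provisioner detection is just `cat /etc/os-release` over SSH followed by matching the distribution fields. A small sketch of the parsing half, fed with the output captured above (the parser is illustrative; the real detection lives in libmachine):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns /etc/os-release-style KEY=value lines into a map,
	// stripping optional surrounding quotes from the values.
	func parseOSRelease(s string) map[string]string {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(s))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			kv := strings.SplitN(line, "=", 2)
			fields[kv[0]] = strings.Trim(kv[1], `"`)
		}
		return fields
	}

	func main() {
		// Captured from the SSH output above.
		osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"

		f := parseOSRelease(osRelease)
		fmt.Println("provisioner compatible:", f["ID"] == "buildroot", "version:", f["VERSION_ID"])
	}
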
	I0717 17:29:35.529188   32725 main.go:141] libmachine: Provisioning with buildroot...
	I0717 17:29:35.529197   32725 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:29:35.529475   32725 buildroot.go:166] provisioning hostname "ha-174628"
	I0717 17:29:35.529501   32725 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:29:35.529704   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.532164   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.532489   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.532511   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.532612   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:35.532804   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.532982   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.533109   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:35.533270   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:35.533478   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:35.533495   32725 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174628 && echo "ha-174628" | sudo tee /etc/hostname
	I0717 17:29:35.642130   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174628
	
	I0717 17:29:35.642159   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.644864   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.645232   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.645256   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.645499   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:35.645684   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.645823   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.645936   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:35.646091   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:35.646296   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:35.646312   32725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174628/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 17:29:35.752600   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:29:35.752628   32725 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 17:29:35.752668   32725 buildroot.go:174] setting up certificates
	I0717 17:29:35.752678   32725 provision.go:84] configureAuth start
	I0717 17:29:35.752689   32725 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:29:35.753010   32725 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:29:35.755301   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.755669   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.755694   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.755836   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.757707   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.757969   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.757991   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.758111   32725 provision.go:143] copyHostCerts
	I0717 17:29:35.758147   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:29:35.758183   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 17:29:35.758199   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:29:35.758268   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 17:29:35.758365   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:29:35.758389   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 17:29:35.758398   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:29:35.758434   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 17:29:35.758490   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:29:35.758516   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 17:29:35.758525   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:29:35.758556   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 17:29:35.758632   32725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.ha-174628 san=[127.0.0.1 192.168.39.100 ha-174628 localhost minikube]
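
The server certificate generated above carries the SANs listed in the log (127.0.0.1, 192.168.39.100, ha-174628, localhost, minikube) and is signed by the profile's CA. The sketch below produces a certificate with the same SANs using only the standard library; it is self-signed for brevity, whereas minikube signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}

		// SANs mirror the log line; the lifetime matches CertExpiration:26280h0m0s.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-174628"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-174628", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.100")},
		}

		// Self-signed here; minikube uses its CA key as the signing parent instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
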
	I0717 17:29:35.994348   32725 provision.go:177] copyRemoteCerts
	I0717 17:29:35.994408   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 17:29:35.994434   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:35.997151   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.997448   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:35.997477   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:35.997628   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:35.997802   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:35.997938   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:35.998093   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:29:36.079128   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 17:29:36.079215   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 17:29:36.102149   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 17:29:36.102225   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0717 17:29:36.123207   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 17:29:36.123268   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 17:29:36.143905   32725 provision.go:87] duration metric: took 391.212994ms to configureAuth
	I0717 17:29:36.143927   32725 buildroot.go:189] setting minikube options for container-runtime
	I0717 17:29:36.144095   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:29:36.144175   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:36.147235   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.147626   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.147652   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.147806   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:36.148011   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.148200   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.148358   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:36.148671   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:36.148837   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:36.148853   32725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 17:29:36.389437   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 17:29:36.389459   32725 main.go:141] libmachine: Checking connection to Docker...
	I0717 17:29:36.389469   32725 main.go:141] libmachine: (ha-174628) Calling .GetURL
	I0717 17:29:36.391084   32725 main.go:141] libmachine: (ha-174628) DBG | Using libvirt version 6000000
	I0717 17:29:36.393220   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.393489   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.393507   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.393720   32725 main.go:141] libmachine: Docker is up and running!
	I0717 17:29:36.393740   32725 main.go:141] libmachine: Reticulating splines...
	I0717 17:29:36.393747   32725 client.go:171] duration metric: took 19.981151074s to LocalClient.Create
	I0717 17:29:36.393772   32725 start.go:167] duration metric: took 19.981216102s to libmachine.API.Create "ha-174628"
	I0717 17:29:36.393782   32725 start.go:293] postStartSetup for "ha-174628" (driver="kvm2")
	I0717 17:29:36.393795   32725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 17:29:36.393816   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:36.394051   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 17:29:36.394082   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:36.396019   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.396337   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.396360   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.396489   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:36.396680   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.396845   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:36.396988   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:29:36.474390   32725 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 17:29:36.478254   32725 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 17:29:36.478277   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 17:29:36.478351   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 17:29:36.478437   32725 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 17:29:36.478450   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /etc/ssl/certs/215772.pem
	I0717 17:29:36.478563   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 17:29:36.487094   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:29:36.508316   32725 start.go:296] duration metric: took 114.523323ms for postStartSetup
	I0717 17:29:36.508386   32725 main.go:141] libmachine: (ha-174628) Calling .GetConfigRaw
	I0717 17:29:36.508909   32725 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:29:36.511347   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.511701   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.511728   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.511910   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:29:36.512089   32725 start.go:128] duration metric: took 20.117074786s to createHost
	I0717 17:29:36.512112   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:36.514288   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.514596   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.514616   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.514768   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:36.514934   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.515092   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.515211   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:36.515345   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:29:36.515497   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:29:36.515509   32725 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 17:29:36.613086   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237376.586010587
	
	I0717 17:29:36.613107   32725 fix.go:216] guest clock: 1721237376.586010587
	I0717 17:29:36.613114   32725 fix.go:229] Guest: 2024-07-17 17:29:36.586010587 +0000 UTC Remote: 2024-07-17 17:29:36.512100213 +0000 UTC m=+20.219026136 (delta=73.910374ms)
	I0717 17:29:36.613144   32725 fix.go:200] guest clock delta is within tolerance: 73.910374ms
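fix.go reads the guest clock over SSH (the `date +%s.%N` command whose format verbs the logger mangles into `%!s(MISSING).%!N(MISSING)` above) and only resets it when the delta to the host clock exceeds a tolerance; here the 73.9ms delta passes. A rough sketch of that comparison, with a one-second tolerance assumed purely for illustration:

    // Sketch: compare a guest epoch reading ("seconds.nanoseconds", as produced
    // by `date +%s.%N`) against the local clock and report whether the delta is
    // within a tolerance. The one-second tolerance is an assumption.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// %N yields nine digits, so the fraction is already in nanoseconds.
    		nsec, err = strconv.ParseInt(parts[1], 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1721237376.586010587") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed tolerance for illustration
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }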
	I0717 17:29:36.613149   32725 start.go:83] releasing machines lock for "ha-174628", held for 20.218205036s
	I0717 17:29:36.613166   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:36.613425   32725 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:29:36.615673   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.615986   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.616011   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.616160   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:36.616571   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:36.616781   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:29:36.616853   32725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 17:29:36.616884   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:36.617024   32725 ssh_runner.go:195] Run: cat /version.json
	I0717 17:29:36.617044   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:29:36.619217   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.619289   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.619604   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.619630   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.619656   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:36.619677   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:36.619888   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:36.619967   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:29:36.620040   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.620118   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:29:36.620169   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:36.620223   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:29:36.620267   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:29:36.620350   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:29:36.723966   32725 ssh_runner.go:195] Run: systemctl --version
	I0717 17:29:36.729560   32725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 17:29:36.880903   32725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 17:29:36.886275   32725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 17:29:36.886329   32725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 17:29:36.901625   32725 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
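Before configuring the runtime, minikube disables any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so the CNI it configures later is the one that takes effect; that is what the find/mv pipeline above just did to 87-podman-bridge.conflist. A small sketch of the same rename step in Go:

    // Sketch: disable conflicting CNI configs under /etc/cni/net.d by renaming
    // them with a .mk_disabled suffix, mirroring the find/mv pipeline above.
    package main

    import (
    	"log"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pattern)
    		if err != nil {
    			log.Fatal(err)
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				log.Fatal(err)
    			}
    			log.Printf("disabled %s", m)
    		}
    	}
    }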
	I0717 17:29:36.901651   32725 start.go:495] detecting cgroup driver to use...
	I0717 17:29:36.901710   32725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 17:29:36.917240   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 17:29:36.930316   32725 docker.go:217] disabling cri-docker service (if available) ...
	I0717 17:29:36.930375   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 17:29:36.943285   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 17:29:36.956166   32725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 17:29:37.080316   32725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 17:29:37.234412   32725 docker.go:233] disabling docker service ...
	I0717 17:29:37.234487   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 17:29:37.247741   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 17:29:37.259812   32725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 17:29:37.368136   32725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 17:29:37.473852   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 17:29:37.486903   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 17:29:37.503326   32725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 17:29:37.503378   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.512631   32725 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 17:29:37.512685   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.521938   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.531128   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.540253   32725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 17:29:37.549895   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.559224   32725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:29:37.575215   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
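The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. A sketch of the first two rewrites done with Go's regexp package instead of sed (illustrative only; writing the file back in place needs root):

    // Sketch: the same kind of in-place edits the sed commands above perform on
    // CRI-O's drop-in config, using Go's regexp package.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	conf := string(data)
    	// pause_image = "registry.k8s.io/pause:3.9"
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	// cgroup_manager = "cgroupfs"
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }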
	I0717 17:29:37.585513   32725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 17:29:37.594121   32725 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 17:29:37.594176   32725 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 17:29:37.605708   32725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 17:29:37.614736   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:29:37.728181   32725 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 17:29:37.860375   32725 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 17:29:37.860465   32725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 17:29:37.864661   32725 start.go:563] Will wait 60s for crictl version
	I0717 17:29:37.864712   32725 ssh_runner.go:195] Run: which crictl
	I0717 17:29:37.868011   32725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 17:29:37.903302   32725 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 17:29:37.903407   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:29:37.930645   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:29:37.958113   32725 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 17:29:37.959294   32725 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:29:37.961924   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:37.962213   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:29:37.962231   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:29:37.962456   32725 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 17:29:37.966414   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
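The bash one-liner above makes host.minikube.internal resolve to the libvirt gateway 192.168.39.1 by stripping any stale entry from /etc/hosts and appending a fresh one. An equivalent sketch in Go, writing to a hypothetical hosts.new instead of /etc/hosts directly:

    // Sketch of the /etc/hosts rewrite shown above: drop any existing
    // host.minikube.internal line and append "192.168.39.1\thost.minikube.internal".
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue // drop the stale entry, as `grep -v` does in the log
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, "192.168.39.1\thost.minikube.internal")
    	if err := os.WriteFile("hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }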
	I0717 17:29:37.978440   32725 kubeadm.go:883] updating cluster {Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 17:29:37.978537   32725 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:29:37.978582   32725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:29:38.007842   32725 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 17:29:38.007955   32725 ssh_runner.go:195] Run: which lz4
	I0717 17:29:38.011775   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 17:29:38.011872   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 17:29:38.015704   32725 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 17:29:38.015736   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 17:29:39.213244   32725 crio.go:462] duration metric: took 1.201400295s to copy over tarball
	I0717 17:29:39.213306   32725 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 17:29:41.331453   32725 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.118121996s)
	I0717 17:29:41.331482   32725 crio.go:469] duration metric: took 2.118216371s to extract the tarball
	I0717 17:29:41.331489   32725 ssh_runner.go:146] rm: /preloaded.tar.lz4
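The preload tarball is copied over SSH and unpacked into /var so CRI-O's image store is warm before kubeadm runs; extraction of the roughly 395 MB archive took about 2.1 s here. A sketch that drives the same tar invocation from Go with os/exec (requires a tar built with lz4 support and root):

    // Sketch: the preload extraction step above, driven from Go with os/exec.
    // Mirrors the logged command: tar --xattrs --xattrs-include
    // security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	cmd := exec.Command("tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("extracted preload in %s", time.Since(start))
    }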
	I0717 17:29:41.368676   32725 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:29:41.409761   32725 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 17:29:41.409780   32725 cache_images.go:84] Images are preloaded, skipping loading
	I0717 17:29:41.409787   32725 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.30.2 crio true true} ...
	I0717 17:29:41.409910   32725 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 17:29:41.409976   32725 ssh_runner.go:195] Run: crio config
	I0717 17:29:41.453071   32725 cni.go:84] Creating CNI manager for ""
	I0717 17:29:41.453088   32725 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 17:29:41.453096   32725 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 17:29:41.453116   32725 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174628 NodeName:ha-174628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 17:29:41.453274   32725 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174628"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
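The kubeadm, kubelet and kube-proxy configuration above is rendered from the option struct logged at kubeadm.go:181 by filling a template. A toy text/template rendering of just the InitConfiguration fragment with the same values, to show how the options map onto the YAML (the template and struct here are simplified stand-ins, not minikube's real ones):

    // Toy sketch: render a fragment of the InitConfiguration above with
    // text/template, using the node values from this run.
    package main

    import (
    	"os"
    	"text/template"
    )

    type nodeOpts struct {
    	NodeName  string
    	NodeIP    string
    	CRISocket string
    	BindPort  int
    }

    const frag = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
    	t := template.Must(template.New("init").Parse(frag))
    	_ = t.Execute(os.Stdout, nodeOpts{
    		NodeName:  "ha-174628",
    		NodeIP:    "192.168.39.100",
    		CRISocket: "unix:///var/run/crio/crio.sock",
    		BindPort:  8443,
    	})
    }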
	
	I0717 17:29:41.453297   32725 kube-vip.go:115] generating kube-vip config ...
	I0717 17:29:41.453345   32725 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 17:29:41.468281   32725 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 17:29:41.468385   32725 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
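The static pod above runs kube-vip with leader election, so exactly one control-plane node owns the virtual IP 192.168.39.254 (the APIServerHAVIP from the cluster config) and load-balances port 8443 across members. A quick net/netip check that the VIP sits inside the node's /24, derived from the node IP and the Prefix:24 field in the DHCP lease lines:

    // Sketch: verify the kube-vip address from the manifest above falls inside
    // the libvirt network's /24 that the node IP 192.168.39.100 belongs to.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	vip := netip.MustParseAddr("192.168.39.254")
    	// Prefix derived from the node IP plus the 24-bit mask in the DHCP lease.
    	subnet := netip.PrefixFrom(netip.MustParseAddr("192.168.39.100"), 24).Masked()
    	fmt.Printf("VIP %s inside %s: %v\n", vip, subnet, subnet.Contains(vip))
    }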
	I0717 17:29:41.468437   32725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 17:29:41.477282   32725 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 17:29:41.477353   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 17:29:41.485995   32725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 17:29:41.501698   32725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 17:29:41.516538   32725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 17:29:41.531329   32725 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0717 17:29:41.546255   32725 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 17:29:41.549735   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:29:41.560619   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:29:41.682551   32725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:29:41.698891   32725 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628 for IP: 192.168.39.100
	I0717 17:29:41.698912   32725 certs.go:194] generating shared ca certs ...
	I0717 17:29:41.698928   32725 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:41.699093   32725 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 17:29:41.699134   32725 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 17:29:41.699144   32725 certs.go:256] generating profile certs ...
	I0717 17:29:41.699195   32725 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key
	I0717 17:29:41.699210   32725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt with IP's: []
	I0717 17:29:41.761284   32725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt ...
	I0717 17:29:41.761310   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt: {Name:mkaa550cef907e86645a1b32cef4325a9904274f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:41.761468   32725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key ...
	I0717 17:29:41.761478   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key: {Name:mk93234ccb835983ded185c78683a2d2955acd08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:41.761558   32725 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.1f3a6050
	I0717 17:29:41.761592   32725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.1f3a6050 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.254]
	I0717 17:29:41.926788   32725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.1f3a6050 ...
	I0717 17:29:41.926815   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.1f3a6050: {Name:mk6c2c70563a3c319a0aa70f1dbcd8aa0b83e8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:41.926980   32725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.1f3a6050 ...
	I0717 17:29:41.926992   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.1f3a6050: {Name:mka7c12426d9818e100dfaa475f8fa1cd5c6ed78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:41.927072   32725 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.1f3a6050 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt
	I0717 17:29:41.927156   32725 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.1f3a6050 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key
	I0717 17:29:41.927212   32725 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key
	I0717 17:29:41.927226   32725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt with IP's: []
	I0717 17:29:42.096708   32725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt ...
	I0717 17:29:42.096736   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt: {Name:mk296ad0cadac71acfe92f700f1e2191c1858ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:42.096881   32725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key ...
	I0717 17:29:42.096890   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key: {Name:mkfb2f7a0dce8485740f966f03539930631a194b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:29:42.096969   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 17:29:42.096985   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 17:29:42.096995   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 17:29:42.097005   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 17:29:42.097017   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 17:29:42.097027   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 17:29:42.097039   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 17:29:42.097048   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 17:29:42.097092   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 17:29:42.097125   32725 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 17:29:42.097134   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 17:29:42.097154   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 17:29:42.097177   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 17:29:42.097199   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 17:29:42.097278   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:29:42.097312   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /usr/share/ca-certificates/215772.pem
	I0717 17:29:42.097325   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:29:42.097338   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem -> /usr/share/ca-certificates/21577.pem
	I0717 17:29:42.097949   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 17:29:42.122140   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 17:29:42.143829   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 17:29:42.165164   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 17:29:42.186851   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 17:29:42.208088   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 17:29:42.229178   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 17:29:42.251527   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 17:29:42.272911   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 17:29:42.294253   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 17:29:42.317178   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 17:29:42.338795   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 17:29:42.354136   32725 ssh_runner.go:195] Run: openssl version
	I0717 17:29:42.359686   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 17:29:42.369732   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 17:29:42.373701   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 17:29:42.373759   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 17:29:42.379183   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 17:29:42.389308   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 17:29:42.399571   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:29:42.403780   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:29:42.403834   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:29:42.409017   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 17:29:42.419127   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 17:29:42.429155   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 17:29:42.433281   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 17:29:42.433338   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 17:29:42.438609   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
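The commands above install each CA bundle under /usr/share/ca-certificates and create the <subject-hash>.0 symlinks (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL-style trust stores use for lookup. A sketch that reproduces one symlink by shelling out to the same `openssl x509 -hash -noout -in` invocation the log shows (paths are placeholders):

    // Sketch: compute the OpenSSL subject hash of a certificate and create the
    // /etc/ssl/certs/<hash>.0 style symlink, mirroring the ln -fs commands above.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder certificate
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // emulate ln -f
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("linked %s -> %s", link, cert)
    }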
	I0717 17:29:42.448527   32725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 17:29:42.452244   32725 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 17:29:42.452306   32725 kubeadm.go:392] StartCluster: {Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:29:42.452388   32725 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 17:29:42.452437   32725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 17:29:42.495018   32725 cri.go:89] found id: ""
	I0717 17:29:42.495097   32725 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 17:29:42.507394   32725 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 17:29:42.517759   32725 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 17:29:42.529392   32725 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 17:29:42.529413   32725 kubeadm.go:157] found existing configuration files:
	
	I0717 17:29:42.529463   32725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 17:29:42.538875   32725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 17:29:42.538935   32725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 17:29:42.547614   32725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 17:29:42.555978   32725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 17:29:42.556042   32725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 17:29:42.564677   32725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 17:29:42.573054   32725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 17:29:42.573147   32725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 17:29:42.582014   32725 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 17:29:42.590217   32725 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 17:29:42.590268   32725 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 17:29:42.598706   32725 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
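ssh_runner.go:286 launches the actual bootstrap: kubeadm init against the rendered /var/tmp/minikube/kubeadm.yaml, ignoring the preflight checks minikube already handles itself (existing manifest dirs, etcd data dir, port 10250, swap, CPU and memory). A minimal os/exec sketch of assembling that invocation, with the ignore list taken from the log line (running it for real needs root on a prepared node):

    // Sketch: build the kubeadm init invocation from the log line above with
    // os/exec. Assumes kubeadm is reachable via the versioned binaries PATH.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	ignore := []string{
    		"DirAvailable--etc-kubernetes-manifests",
    		"DirAvailable--var-lib-minikube",
    		"DirAvailable--var-lib-minikube-etcd",
    		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
    		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
    		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
    		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
    		"Port-10250", "Swap", "NumCPU", "Mem",
    	}
    	cmd := exec.Command("kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml",
    		"--ignore-preflight-errors="+strings.Join(ignore, ","))
    	cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.30.2:"+os.Getenv("PATH"))
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    }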
	I0717 17:29:42.820351   32725 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 17:29:53.880142   32725 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 17:29:53.880255   32725 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 17:29:53.880376   32725 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 17:29:53.880488   32725 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 17:29:53.880610   32725 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 17:29:53.880732   32725 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 17:29:53.882105   32725 out.go:204]   - Generating certificates and keys ...
	I0717 17:29:53.882181   32725 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 17:29:53.882251   32725 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 17:29:53.882331   32725 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 17:29:53.882432   32725 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 17:29:53.882528   32725 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 17:29:53.882603   32725 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 17:29:53.882689   32725 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 17:29:53.882811   32725 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-174628 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0717 17:29:53.882860   32725 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 17:29:53.882975   32725 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-174628 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0717 17:29:53.883050   32725 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 17:29:53.883123   32725 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 17:29:53.883183   32725 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 17:29:53.883251   32725 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 17:29:53.883318   32725 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 17:29:53.883372   32725 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 17:29:53.883435   32725 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 17:29:53.883516   32725 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 17:29:53.883568   32725 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 17:29:53.883639   32725 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 17:29:53.883697   32725 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 17:29:53.885138   32725 out.go:204]   - Booting up control plane ...
	I0717 17:29:53.885232   32725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 17:29:53.885326   32725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 17:29:53.885423   32725 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 17:29:53.885542   32725 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 17:29:53.885635   32725 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 17:29:53.885670   32725 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 17:29:53.885778   32725 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 17:29:53.885840   32725 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 17:29:53.885889   32725 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.398722ms
	I0717 17:29:53.885968   32725 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 17:29:53.886030   32725 kubeadm.go:310] [api-check] The API server is healthy after 5.960239011s
	I0717 17:29:53.886116   32725 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 17:29:53.886220   32725 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 17:29:53.886275   32725 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 17:29:53.886420   32725 kubeadm.go:310] [mark-control-plane] Marking the node ha-174628 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 17:29:53.886468   32725 kubeadm.go:310] [bootstrap-token] Using token: wck5nb.rxemfngs4xdsbvfr
	I0717 17:29:53.887717   32725 out.go:204]   - Configuring RBAC rules ...
	I0717 17:29:53.887821   32725 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 17:29:53.887899   32725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 17:29:53.888015   32725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 17:29:53.888145   32725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 17:29:53.888284   32725 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 17:29:53.888378   32725 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 17:29:53.888511   32725 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 17:29:53.888575   32725 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 17:29:53.888641   32725 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 17:29:53.888649   32725 kubeadm.go:310] 
	I0717 17:29:53.888717   32725 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 17:29:53.888725   32725 kubeadm.go:310] 
	I0717 17:29:53.888786   32725 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 17:29:53.888792   32725 kubeadm.go:310] 
	I0717 17:29:53.888820   32725 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 17:29:53.888871   32725 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 17:29:53.888914   32725 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 17:29:53.888919   32725 kubeadm.go:310] 
	I0717 17:29:53.889001   32725 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 17:29:53.889014   32725 kubeadm.go:310] 
	I0717 17:29:53.889057   32725 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 17:29:53.889063   32725 kubeadm.go:310] 
	I0717 17:29:53.889114   32725 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 17:29:53.889192   32725 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 17:29:53.889284   32725 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 17:29:53.889293   32725 kubeadm.go:310] 
	I0717 17:29:53.889399   32725 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 17:29:53.889464   32725 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 17:29:53.889470   32725 kubeadm.go:310] 
	I0717 17:29:53.889542   32725 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wck5nb.rxemfngs4xdsbvfr \
	I0717 17:29:53.889637   32725 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 17:29:53.889658   32725 kubeadm.go:310] 	--control-plane 
	I0717 17:29:53.889663   32725 kubeadm.go:310] 
	I0717 17:29:53.889733   32725 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 17:29:53.889739   32725 kubeadm.go:310] 
	I0717 17:29:53.889826   32725 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wck5nb.rxemfngs4xdsbvfr \
	I0717 17:29:53.889948   32725 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 17:29:53.889963   32725 cni.go:84] Creating CNI manager for ""
	I0717 17:29:53.889968   32725 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0717 17:29:53.891370   32725 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 17:29:53.892517   32725 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 17:29:53.897907   32725 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.2/kubectl ...
	I0717 17:29:53.897927   32725 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 17:29:53.915259   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 17:29:54.226731   32725 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 17:29:54.226803   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:54.226833   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174628 minikube.k8s.io/updated_at=2024_07_17T17_29_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=ha-174628 minikube.k8s.io/primary=true
	I0717 17:29:54.244180   32725 ops.go:34] apiserver oom_adj: -16
	I0717 17:29:54.402128   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:54.902588   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:55.403044   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:55.903030   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:56.402632   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:56.902925   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:57.403211   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:57.902997   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:58.402988   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:58.902391   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:59.402474   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:29:59.902649   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:00.402945   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:00.902998   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:01.402544   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:01.902870   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:02.402672   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:02.902952   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:03.402546   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:03.902512   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:04.402639   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:04.902292   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:05.402402   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:05.903191   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 17:30:05.991444   32725 kubeadm.go:1113] duration metric: took 11.764692753s to wait for elevateKubeSystemPrivileges
	I0717 17:30:05.991485   32725 kubeadm.go:394] duration metric: took 23.539184464s to StartCluster
	I0717 17:30:05.991509   32725 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:30:05.991584   32725 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:30:05.992296   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:30:05.992512   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 17:30:05.992540   32725 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 17:30:05.992585   32725 addons.go:69] Setting storage-provisioner=true in profile "ha-174628"
	I0717 17:30:05.992615   32725 addons.go:234] Setting addon storage-provisioner=true in "ha-174628"
	I0717 17:30:05.992523   32725 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:30:05.992628   32725 addons.go:69] Setting default-storageclass=true in profile "ha-174628"
	I0717 17:30:05.992639   32725 start.go:241] waiting for startup goroutines ...
	I0717 17:30:05.992643   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:30:05.992660   32725 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-174628"
	I0717 17:30:05.992726   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:30:05.993053   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:05.993084   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:05.993084   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:05.993106   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:06.008381   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33189
	I0717 17:30:06.008393   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I0717 17:30:06.008836   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.008956   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.009351   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.009376   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.009469   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.009494   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.009713   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.009806   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.009881   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:30:06.010383   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:06.010415   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:06.012035   32725 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:30:06.012365   32725 kapi.go:59] client config for ha-174628: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt", KeyFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key", CAFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 17:30:06.012853   32725 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 17:30:06.013001   32725 addons.go:234] Setting addon default-storageclass=true in "ha-174628"
	I0717 17:30:06.013034   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:30:06.013276   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:06.013299   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:06.025434   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0717 17:30:06.025898   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.026399   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.026424   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.026805   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.026990   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:30:06.027770   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0717 17:30:06.028205   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.028641   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.028660   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.028821   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:30:06.028996   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.029418   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:06.029451   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:06.031030   32725 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 17:30:06.032429   32725 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 17:30:06.032450   32725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 17:30:06.032470   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:30:06.035519   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:06.036037   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:30:06.036074   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:06.036167   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:30:06.036401   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:30:06.036617   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:30:06.036775   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:30:06.046443   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0717 17:30:06.046944   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.047416   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.047439   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.047776   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.047965   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:30:06.049498   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:30:06.049688   32725 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 17:30:06.049704   32725 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 17:30:06.049722   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:30:06.052710   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:06.053337   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:30:06.053364   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:06.053529   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:30:06.053684   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:30:06.053842   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:30:06.053999   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:30:06.118148   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 17:30:06.249823   32725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 17:30:06.271489   32725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 17:30:06.632818   32725 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0717 17:30:06.906746   32725 main.go:141] libmachine: Making call to close driver server
	I0717 17:30:06.906776   32725 main.go:141] libmachine: (ha-174628) Calling .Close
	I0717 17:30:06.906820   32725 main.go:141] libmachine: Making call to close driver server
	I0717 17:30:06.906975   32725 main.go:141] libmachine: (ha-174628) Calling .Close
	I0717 17:30:06.907456   32725 main.go:141] libmachine: (ha-174628) DBG | Closing plugin on server side
	I0717 17:30:06.907549   32725 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:30:06.907561   32725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:30:06.907579   32725 main.go:141] libmachine: Making call to close driver server
	I0717 17:30:06.907587   32725 main.go:141] libmachine: (ha-174628) Calling .Close
	I0717 17:30:06.907975   32725 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:30:06.908006   32725 main.go:141] libmachine: (ha-174628) DBG | Closing plugin on server side
	I0717 17:30:06.908028   32725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:30:06.908039   32725 main.go:141] libmachine: Making call to close driver server
	I0717 17:30:06.908053   32725 main.go:141] libmachine: (ha-174628) Calling .Close
	I0717 17:30:06.908297   32725 main.go:141] libmachine: (ha-174628) DBG | Closing plugin on server side
	I0717 17:30:06.908369   32725 main.go:141] libmachine: (ha-174628) DBG | Closing plugin on server side
	I0717 17:30:06.908412   32725 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:30:06.908420   32725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:30:06.908425   32725 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:30:06.908441   32725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:30:06.908535   32725 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 17:30:06.908547   32725 round_trippers.go:469] Request Headers:
	I0717 17:30:06.908564   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:30:06.908578   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:30:06.919354   32725 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 17:30:06.919857   32725 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0717 17:30:06.919870   32725 round_trippers.go:469] Request Headers:
	I0717 17:30:06.919877   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:30:06.919880   32725 round_trippers.go:473]     Content-Type: application/json
	I0717 17:30:06.919883   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:30:06.922413   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:30:06.922543   32725 main.go:141] libmachine: Making call to close driver server
	I0717 17:30:06.922554   32725 main.go:141] libmachine: (ha-174628) Calling .Close
	I0717 17:30:06.922830   32725 main.go:141] libmachine: Successfully made call to close driver server
	I0717 17:30:06.922848   32725 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 17:30:06.925570   32725 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 17:30:06.927276   32725 addons.go:510] duration metric: took 934.730792ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 17:30:06.927316   32725 start.go:246] waiting for cluster config update ...
	I0717 17:30:06.927331   32725 start.go:255] writing updated cluster config ...
	I0717 17:30:06.929157   32725 out.go:177] 
	I0717 17:30:06.930559   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:30:06.930658   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:30:06.932287   32725 out.go:177] * Starting "ha-174628-m02" control-plane node in "ha-174628" cluster
	I0717 17:30:06.933735   32725 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:30:06.933762   32725 cache.go:56] Caching tarball of preloaded images
	I0717 17:30:06.933852   32725 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 17:30:06.933872   32725 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 17:30:06.933944   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:30:06.934109   32725 start.go:360] acquireMachinesLock for ha-174628-m02: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 17:30:06.934162   32725 start.go:364] duration metric: took 32.269µs to acquireMachinesLock for "ha-174628-m02"
	I0717 17:30:06.934186   32725 start.go:93] Provisioning new machine with config: &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:30:06.934266   32725 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0717 17:30:06.935760   32725 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 17:30:06.935850   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:06.935883   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:06.950705   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42145
	I0717 17:30:06.951110   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:06.951621   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:06.951637   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:06.951971   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:06.952163   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetMachineName
	I0717 17:30:06.952334   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:06.952481   32725 start.go:159] libmachine.API.Create for "ha-174628" (driver="kvm2")
	I0717 17:30:06.952503   32725 client.go:168] LocalClient.Create starting
	I0717 17:30:06.952538   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 17:30:06.952574   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:30:06.952594   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:30:06.952651   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 17:30:06.952669   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:30:06.952680   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:30:06.952698   32725 main.go:141] libmachine: Running pre-create checks...
	I0717 17:30:06.952706   32725 main.go:141] libmachine: (ha-174628-m02) Calling .PreCreateCheck
	I0717 17:30:06.952893   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetConfigRaw
	I0717 17:30:06.953325   32725 main.go:141] libmachine: Creating machine...
	I0717 17:30:06.953341   32725 main.go:141] libmachine: (ha-174628-m02) Calling .Create
	I0717 17:30:06.953450   32725 main.go:141] libmachine: (ha-174628-m02) Creating KVM machine...
	I0717 17:30:06.954437   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found existing default KVM network
	I0717 17:30:06.954588   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found existing private KVM network mk-ha-174628
	I0717 17:30:06.954768   32725 main.go:141] libmachine: (ha-174628-m02) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02 ...
	I0717 17:30:06.954796   32725 main.go:141] libmachine: (ha-174628-m02) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 17:30:06.954814   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:06.954714   33086 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:30:06.954917   32725 main.go:141] libmachine: (ha-174628-m02) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 17:30:07.182542   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:07.182425   33086 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa...
	I0717 17:30:07.521008   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:07.520862   33086 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/ha-174628-m02.rawdisk...
	I0717 17:30:07.521037   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Writing magic tar header
	I0717 17:30:07.521049   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Writing SSH key tar header
	I0717 17:30:07.521065   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:07.520995   33086 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02 ...
	I0717 17:30:07.521084   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02
	I0717 17:30:07.521168   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 17:30:07.521196   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02 (perms=drwx------)
	I0717 17:30:07.521207   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:30:07.521226   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 17:30:07.521233   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 17:30:07.521240   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home/jenkins
	I0717 17:30:07.521250   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Checking permissions on dir: /home
	I0717 17:30:07.521261   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Skipping /home - not owner
	I0717 17:30:07.521289   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 17:30:07.521306   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 17:30:07.521317   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 17:30:07.521329   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 17:30:07.521339   32725 main.go:141] libmachine: (ha-174628-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 17:30:07.521344   32725 main.go:141] libmachine: (ha-174628-m02) Creating domain...
	I0717 17:30:07.522281   32725 main.go:141] libmachine: (ha-174628-m02) define libvirt domain using xml: 
	I0717 17:30:07.522306   32725 main.go:141] libmachine: (ha-174628-m02) <domain type='kvm'>
	I0717 17:30:07.522318   32725 main.go:141] libmachine: (ha-174628-m02)   <name>ha-174628-m02</name>
	I0717 17:30:07.522330   32725 main.go:141] libmachine: (ha-174628-m02)   <memory unit='MiB'>2200</memory>
	I0717 17:30:07.522344   32725 main.go:141] libmachine: (ha-174628-m02)   <vcpu>2</vcpu>
	I0717 17:30:07.522356   32725 main.go:141] libmachine: (ha-174628-m02)   <features>
	I0717 17:30:07.522384   32725 main.go:141] libmachine: (ha-174628-m02)     <acpi/>
	I0717 17:30:07.522408   32725 main.go:141] libmachine: (ha-174628-m02)     <apic/>
	I0717 17:30:07.522418   32725 main.go:141] libmachine: (ha-174628-m02)     <pae/>
	I0717 17:30:07.522428   32725 main.go:141] libmachine: (ha-174628-m02)     
	I0717 17:30:07.522438   32725 main.go:141] libmachine: (ha-174628-m02)   </features>
	I0717 17:30:07.522451   32725 main.go:141] libmachine: (ha-174628-m02)   <cpu mode='host-passthrough'>
	I0717 17:30:07.522462   32725 main.go:141] libmachine: (ha-174628-m02)   
	I0717 17:30:07.522470   32725 main.go:141] libmachine: (ha-174628-m02)   </cpu>
	I0717 17:30:07.522476   32725 main.go:141] libmachine: (ha-174628-m02)   <os>
	I0717 17:30:07.522484   32725 main.go:141] libmachine: (ha-174628-m02)     <type>hvm</type>
	I0717 17:30:07.522492   32725 main.go:141] libmachine: (ha-174628-m02)     <boot dev='cdrom'/>
	I0717 17:30:07.522497   32725 main.go:141] libmachine: (ha-174628-m02)     <boot dev='hd'/>
	I0717 17:30:07.522506   32725 main.go:141] libmachine: (ha-174628-m02)     <bootmenu enable='no'/>
	I0717 17:30:07.522519   32725 main.go:141] libmachine: (ha-174628-m02)   </os>
	I0717 17:30:07.522538   32725 main.go:141] libmachine: (ha-174628-m02)   <devices>
	I0717 17:30:07.522549   32725 main.go:141] libmachine: (ha-174628-m02)     <disk type='file' device='cdrom'>
	I0717 17:30:07.522567   32725 main.go:141] libmachine: (ha-174628-m02)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/boot2docker.iso'/>
	I0717 17:30:07.522578   32725 main.go:141] libmachine: (ha-174628-m02)       <target dev='hdc' bus='scsi'/>
	I0717 17:30:07.522588   32725 main.go:141] libmachine: (ha-174628-m02)       <readonly/>
	I0717 17:30:07.522601   32725 main.go:141] libmachine: (ha-174628-m02)     </disk>
	I0717 17:30:07.522611   32725 main.go:141] libmachine: (ha-174628-m02)     <disk type='file' device='disk'>
	I0717 17:30:07.522628   32725 main.go:141] libmachine: (ha-174628-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 17:30:07.522643   32725 main.go:141] libmachine: (ha-174628-m02)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/ha-174628-m02.rawdisk'/>
	I0717 17:30:07.522673   32725 main.go:141] libmachine: (ha-174628-m02)       <target dev='hda' bus='virtio'/>
	I0717 17:30:07.522702   32725 main.go:141] libmachine: (ha-174628-m02)     </disk>
	I0717 17:30:07.522716   32725 main.go:141] libmachine: (ha-174628-m02)     <interface type='network'>
	I0717 17:30:07.522727   32725 main.go:141] libmachine: (ha-174628-m02)       <source network='mk-ha-174628'/>
	I0717 17:30:07.522771   32725 main.go:141] libmachine: (ha-174628-m02)       <model type='virtio'/>
	I0717 17:30:07.522783   32725 main.go:141] libmachine: (ha-174628-m02)     </interface>
	I0717 17:30:07.522794   32725 main.go:141] libmachine: (ha-174628-m02)     <interface type='network'>
	I0717 17:30:07.522804   32725 main.go:141] libmachine: (ha-174628-m02)       <source network='default'/>
	I0717 17:30:07.522815   32725 main.go:141] libmachine: (ha-174628-m02)       <model type='virtio'/>
	I0717 17:30:07.522825   32725 main.go:141] libmachine: (ha-174628-m02)     </interface>
	I0717 17:30:07.522835   32725 main.go:141] libmachine: (ha-174628-m02)     <serial type='pty'>
	I0717 17:30:07.522845   32725 main.go:141] libmachine: (ha-174628-m02)       <target port='0'/>
	I0717 17:30:07.522873   32725 main.go:141] libmachine: (ha-174628-m02)     </serial>
	I0717 17:30:07.522894   32725 main.go:141] libmachine: (ha-174628-m02)     <console type='pty'>
	I0717 17:30:07.522907   32725 main.go:141] libmachine: (ha-174628-m02)       <target type='serial' port='0'/>
	I0717 17:30:07.522918   32725 main.go:141] libmachine: (ha-174628-m02)     </console>
	I0717 17:30:07.522930   32725 main.go:141] libmachine: (ha-174628-m02)     <rng model='virtio'>
	I0717 17:30:07.522944   32725 main.go:141] libmachine: (ha-174628-m02)       <backend model='random'>/dev/random</backend>
	I0717 17:30:07.522954   32725 main.go:141] libmachine: (ha-174628-m02)     </rng>
	I0717 17:30:07.522963   32725 main.go:141] libmachine: (ha-174628-m02)     
	I0717 17:30:07.522974   32725 main.go:141] libmachine: (ha-174628-m02)     
	I0717 17:30:07.522983   32725 main.go:141] libmachine: (ha-174628-m02)   </devices>
	I0717 17:30:07.522993   32725 main.go:141] libmachine: (ha-174628-m02) </domain>
	I0717 17:30:07.523003   32725 main.go:141] libmachine: (ha-174628-m02) 
	I0717 17:30:07.529602   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:6e:7d:7d in network default
	I0717 17:30:07.530175   32725 main.go:141] libmachine: (ha-174628-m02) Ensuring networks are active...
	I0717 17:30:07.530198   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:07.530797   32725 main.go:141] libmachine: (ha-174628-m02) Ensuring network default is active
	I0717 17:30:07.531120   32725 main.go:141] libmachine: (ha-174628-m02) Ensuring network mk-ha-174628 is active
	I0717 17:30:07.531478   32725 main.go:141] libmachine: (ha-174628-m02) Getting domain xml...
	I0717 17:30:07.532194   32725 main.go:141] libmachine: (ha-174628-m02) Creating domain...
	I0717 17:30:08.735024   32725 main.go:141] libmachine: (ha-174628-m02) Waiting to get IP...
	I0717 17:30:08.735908   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:08.736309   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:08.736374   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:08.736278   33086 retry.go:31] will retry after 254.757459ms: waiting for machine to come up
	I0717 17:30:08.992936   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:08.993338   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:08.993368   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:08.993300   33086 retry.go:31] will retry after 349.817685ms: waiting for machine to come up
	I0717 17:30:09.345304   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:09.346035   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:09.346059   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:09.345976   33086 retry.go:31] will retry after 431.850456ms: waiting for machine to come up
	I0717 17:30:09.779407   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:09.779903   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:09.779929   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:09.779875   33086 retry.go:31] will retry after 521.386512ms: waiting for machine to come up
	I0717 17:30:10.303006   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:10.303441   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:10.303462   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:10.303404   33086 retry.go:31] will retry after 654.88693ms: waiting for machine to come up
	I0717 17:30:10.960250   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:10.960665   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:10.960695   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:10.960615   33086 retry.go:31] will retry after 812.663457ms: waiting for machine to come up
	I0717 17:30:11.774425   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:11.774828   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:11.774848   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:11.774757   33086 retry.go:31] will retry after 909.070997ms: waiting for machine to come up
	I0717 17:30:12.684873   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:12.685282   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:12.685305   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:12.685240   33086 retry.go:31] will retry after 1.4060659s: waiting for machine to come up
	I0717 17:30:14.093810   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:14.094221   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:14.094246   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:14.094188   33086 retry.go:31] will retry after 1.617063869s: waiting for machine to come up
	I0717 17:30:15.714144   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:15.714673   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:15.714699   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:15.714626   33086 retry.go:31] will retry after 1.560364715s: waiting for machine to come up
	I0717 17:30:17.276818   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:17.277355   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:17.277380   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:17.277305   33086 retry.go:31] will retry after 1.983112853s: waiting for machine to come up
	I0717 17:30:19.263384   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:19.263769   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:19.263792   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:19.263735   33086 retry.go:31] will retry after 2.937547634s: waiting for machine to come up
	I0717 17:30:22.202387   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:22.202878   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find current IP address of domain ha-174628-m02 in network mk-ha-174628
	I0717 17:30:22.202902   32725 main.go:141] libmachine: (ha-174628-m02) DBG | I0717 17:30:22.202827   33086 retry.go:31] will retry after 4.241030651s: waiting for machine to come up
	I0717 17:30:26.445900   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.446246   32725 main.go:141] libmachine: (ha-174628-m02) Found IP for machine: 192.168.39.97
	I0717 17:30:26.446268   32725 main.go:141] libmachine: (ha-174628-m02) Reserving static IP address...
	I0717 17:30:26.446279   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has current primary IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.446684   32725 main.go:141] libmachine: (ha-174628-m02) DBG | unable to find host DHCP lease matching {name: "ha-174628-m02", mac: "52:54:00:26:10:53", ip: "192.168.39.97"} in network mk-ha-174628
	I0717 17:30:26.518198   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Getting to WaitForSSH function...
	I0717 17:30:26.518220   32725 main.go:141] libmachine: (ha-174628-m02) Reserved static IP address: 192.168.39.97
	I0717 17:30:26.518233   32725 main.go:141] libmachine: (ha-174628-m02) Waiting for SSH to be available...
	I0717 17:30:26.520920   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.521410   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:minikube Clientid:01:52:54:00:26:10:53}
	I0717 17:30:26.521444   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.521557   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Using SSH client type: external
	I0717 17:30:26.521586   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa (-rw-------)
	I0717 17:30:26.521615   32725 main.go:141] libmachine: (ha-174628-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 17:30:26.521630   32725 main.go:141] libmachine: (ha-174628-m02) DBG | About to run SSH command:
	I0717 17:30:26.521644   32725 main.go:141] libmachine: (ha-174628-m02) DBG | exit 0
	I0717 17:30:26.648877   32725 main.go:141] libmachine: (ha-174628-m02) DBG | SSH cmd err, output: <nil>: 
	I0717 17:30:26.649148   32725 main.go:141] libmachine: (ha-174628-m02) KVM machine creation complete!
	I0717 17:30:26.649484   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetConfigRaw
	I0717 17:30:26.650078   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:26.650244   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:26.650403   32725 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 17:30:26.650416   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:30:26.651631   32725 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 17:30:26.651646   32725 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 17:30:26.651652   32725 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 17:30:26.651657   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:26.653793   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.654110   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:26.654135   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.654284   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:26.654463   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.654621   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.654759   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:26.654923   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:26.655115   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:26.655126   32725 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 17:30:26.768026   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:30:26.768087   32725 main.go:141] libmachine: Detecting the provisioner...
	I0717 17:30:26.768100   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:26.770745   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.771069   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:26.771094   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.771291   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:26.771492   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.771685   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.771830   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:26.772002   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:26.772190   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:26.772204   32725 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 17:30:26.881251   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 17:30:26.881312   32725 main.go:141] libmachine: found compatible host: buildroot
	I0717 17:30:26.881322   32725 main.go:141] libmachine: Provisioning with buildroot...
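	The provisioner check above just reads the guest's /etc/os-release and matches on the distribution ID; a manual spot-check on the node (illustrative only, not something the harness runs) would be:
	    . /etc/os-release
	    echo "$ID $VERSION_ID"    # prints "buildroot 2023.02.9" on this guest image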
	I0717 17:30:26.881332   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetMachineName
	I0717 17:30:26.881567   32725 buildroot.go:166] provisioning hostname "ha-174628-m02"
	I0717 17:30:26.881593   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetMachineName
	I0717 17:30:26.881754   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:26.884232   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.884579   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:26.884606   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:26.884707   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:26.884877   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.885132   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:26.885331   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:26.885482   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:26.885643   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:26.885653   32725 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174628-m02 && echo "ha-174628-m02" | sudo tee /etc/hostname
	I0717 17:30:27.009565   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174628-m02
	
	I0717 17:30:27.009597   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.012536   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.012896   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.012917   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.013166   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.013342   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.013521   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.013661   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.013798   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:27.013959   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:27.013981   32725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174628-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174628-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174628-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 17:30:27.134434   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:30:27.134457   32725 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 17:30:27.134476   32725 buildroot.go:174] setting up certificates
	I0717 17:30:27.134488   32725 provision.go:84] configureAuth start
	I0717 17:30:27.134499   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetMachineName
	I0717 17:30:27.134767   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:30:27.137175   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.137604   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.137630   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.137738   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.139589   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.139907   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.139931   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.140043   32725 provision.go:143] copyHostCerts
	I0717 17:30:27.140079   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:30:27.140118   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 17:30:27.140128   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:30:27.140208   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 17:30:27.140307   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:30:27.140347   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 17:30:27.140357   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:30:27.140394   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 17:30:27.140470   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:30:27.140490   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 17:30:27.140496   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:30:27.140531   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 17:30:27.140613   32725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.ha-174628-m02 san=[127.0.0.1 192.168.39.97 ha-174628-m02 localhost minikube]
	I0717 17:30:27.270219   32725 provision.go:177] copyRemoteCerts
	I0717 17:30:27.270272   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 17:30:27.270295   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.272729   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.273036   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.273057   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.273287   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.273469   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.273636   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.273752   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	I0717 17:30:27.362923   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 17:30:27.363014   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 17:30:27.388021   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 17:30:27.388092   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 17:30:27.409698   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 17:30:27.409775   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
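	The server certificate just copied to /etc/docker/server.pem was generated a moment earlier with SANs 127.0.0.1, 192.168.39.97, ha-174628-m02, localhost and minikube; if needed, those can be read back on the guest with a standard openssl query (illustrative only):
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	      | grep -A1 'Subject Alternative Name'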
	I0717 17:30:27.431771   32725 provision.go:87] duration metric: took 297.270251ms to configureAuth
	I0717 17:30:27.431801   32725 buildroot.go:189] setting minikube options for container-runtime
	I0717 17:30:27.431978   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:30:27.432045   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.434814   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.435235   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.435262   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.435458   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.435646   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.435843   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.435965   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.436086   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:27.436267   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:27.436283   32725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 17:30:27.697507   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
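	The %!s(MISSING) verbs in the command above are printf-format artifacts in the captured log; judging from the output echoed back, the command being run is roughly equivalent to this sketch:
	    sudo mkdir -p /etc/sysconfig
	    printf '%s' "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio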
	
	I0717 17:30:27.697534   32725 main.go:141] libmachine: Checking connection to Docker...
	I0717 17:30:27.697542   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetURL
	I0717 17:30:27.698674   32725 main.go:141] libmachine: (ha-174628-m02) DBG | Using libvirt version 6000000
	I0717 17:30:27.700401   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.700707   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.700744   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.700814   32725 main.go:141] libmachine: Docker is up and running!
	I0717 17:30:27.700828   32725 main.go:141] libmachine: Reticulating splines...
	I0717 17:30:27.700836   32725 client.go:171] duration metric: took 20.748326231s to LocalClient.Create
	I0717 17:30:27.700860   32725 start.go:167] duration metric: took 20.748380298s to libmachine.API.Create "ha-174628"
	I0717 17:30:27.700871   32725 start.go:293] postStartSetup for "ha-174628-m02" (driver="kvm2")
	I0717 17:30:27.700884   32725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 17:30:27.700900   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:27.701122   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 17:30:27.701143   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.702855   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.703123   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.703149   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.703262   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.703445   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.703589   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.703743   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	I0717 17:30:27.787294   32725 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 17:30:27.791263   32725 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 17:30:27.791287   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 17:30:27.791350   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 17:30:27.791443   32725 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 17:30:27.791454   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /etc/ssl/certs/215772.pem
	I0717 17:30:27.791556   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 17:30:27.800213   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:30:27.822076   32725 start.go:296] duration metric: took 121.191213ms for postStartSetup
	I0717 17:30:27.822129   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetConfigRaw
	I0717 17:30:27.822728   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:30:27.825244   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.825700   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.825728   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.825990   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:30:27.826187   32725 start.go:128] duration metric: took 20.891909418s to createHost
	I0717 17:30:27.826210   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.828484   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.828854   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.828880   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.829016   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.829195   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.829361   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.829464   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.829627   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:30:27.829819   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.97 22 <nil> <nil>}
	I0717 17:30:27.829831   32725 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 17:30:27.937280   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237427.910092037
	
	I0717 17:30:27.937304   32725 fix.go:216] guest clock: 1721237427.910092037
	I0717 17:30:27.937311   32725 fix.go:229] Guest: 2024-07-17 17:30:27.910092037 +0000 UTC Remote: 2024-07-17 17:30:27.826199284 +0000 UTC m=+71.533125181 (delta=83.892753ms)
	I0717 17:30:27.937325   32725 fix.go:200] guest clock delta is within tolerance: 83.892753ms
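	The delta quoted above is simply the guest clock minus the locally recorded reference: 17:30:27.910092037 minus 17:30:27.826199284 is 0.083892753 s, i.e. 83.892753ms, which is why it is accepted as within tolerance and the run proceeds without forcing a clock adjustment.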
	I0717 17:30:27.937330   32725 start.go:83] releasing machines lock for "ha-174628-m02", held for 21.003156575s
	I0717 17:30:27.937350   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:27.937657   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:30:27.940144   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.940475   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.940501   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.942679   32725 out.go:177] * Found network options:
	I0717 17:30:27.944029   32725 out.go:177]   - NO_PROXY=192.168.39.100
	W0717 17:30:27.945336   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 17:30:27.945371   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:27.945891   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:27.946053   32725 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:30:27.946121   32725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 17:30:27.946150   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	W0717 17:30:27.946221   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 17:30:27.946296   32725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 17:30:27.946318   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:30:27.948894   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.949231   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.949259   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.949280   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.949462   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.949629   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.949711   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:27.949733   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:27.949796   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.949877   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:30:27.949964   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	I0717 17:30:27.950032   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:30:27.950156   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:30:27.950283   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	I0717 17:30:28.183059   32725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 17:30:28.188754   32725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 17:30:28.188826   32725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 17:30:28.203181   32725 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 17:30:28.203205   32725 start.go:495] detecting cgroup driver to use...
	I0717 17:30:28.203275   32725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 17:30:28.218372   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 17:30:28.231086   32725 docker.go:217] disabling cri-docker service (if available) ...
	I0717 17:30:28.231151   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 17:30:28.243630   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 17:30:28.256287   32725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 17:30:28.368408   32725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 17:30:28.504467   32725 docker.go:233] disabling docker service ...
	I0717 17:30:28.504549   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 17:30:28.517898   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 17:30:28.529703   32725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 17:30:28.668306   32725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 17:30:28.772095   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 17:30:28.785276   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 17:30:28.802030   32725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 17:30:28.802113   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.811581   32725 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 17:30:28.811658   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.821646   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.830882   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.840164   32725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 17:30:28.849652   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.858642   32725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.875466   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:30:28.884844   32725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 17:30:28.893426   32725 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 17:30:28.893476   32725 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 17:30:28.905941   32725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 17:30:28.914900   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:30:29.028588   32725 ssh_runner.go:195] Run: sudo systemctl restart crio
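	Taken together, the sed edits above point CRI-O at the registry.k8s.io/pause:3.9 pause image, set cgroup_manager to cgroupfs with a pod-scoped conmon cgroup, and open net.ipv4.ip_unprivileged_port_start=0 via default_sysctls; after the restart just issued, the result can be confirmed on the node with something like (illustrative only):
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf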
	I0717 17:30:29.158553   32725 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 17:30:29.158626   32725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 17:30:29.163519   32725 start.go:563] Will wait 60s for crictl version
	I0717 17:30:29.163631   32725 ssh_runner.go:195] Run: which crictl
	I0717 17:30:29.167499   32725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 17:30:29.208428   32725 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 17:30:29.208514   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:30:29.237255   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:30:29.267886   32725 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 17:30:29.269253   32725 out.go:177]   - env NO_PROXY=192.168.39.100
	I0717 17:30:29.270333   32725 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:30:29.273419   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:29.273804   32725 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:30:20 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:30:29.273833   32725 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:30:29.274006   32725 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 17:30:29.277914   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:30:29.290713   32725 mustload.go:65] Loading cluster: ha-174628
	I0717 17:30:29.290872   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:30:29.291103   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:29.291129   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:29.305889   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
	I0717 17:30:29.306305   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:29.306805   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:29.306827   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:29.307152   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:29.307406   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:30:29.308984   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:30:29.309275   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:30:29.309322   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:30:29.324357   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41527
	I0717 17:30:29.324833   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:30:29.325308   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:30:29.325329   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:30:29.325634   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:30:29.325821   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:30:29.325980   32725 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628 for IP: 192.168.39.97
	I0717 17:30:29.325992   32725 certs.go:194] generating shared ca certs ...
	I0717 17:30:29.326011   32725 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:30:29.326139   32725 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 17:30:29.326189   32725 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 17:30:29.326202   32725 certs.go:256] generating profile certs ...
	I0717 17:30:29.326292   32725 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key
	I0717 17:30:29.326327   32725 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.79966bc2
	I0717 17:30:29.326349   32725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.79966bc2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.97 192.168.39.254]
	I0717 17:30:29.599890   32725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.79966bc2 ...
	I0717 17:30:29.599919   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.79966bc2: {Name:mk4aa20f793a6c7a0fef2d3ef9b599c41575e148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:30:29.600096   32725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.79966bc2 ...
	I0717 17:30:29.600112   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.79966bc2: {Name:mk73dd4d067123d7bffcad1ee9aecc3a37f46efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:30:29.600206   32725 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.79966bc2 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt
	I0717 17:30:29.600356   32725 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.79966bc2 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key
	I0717 17:30:29.600517   32725 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key
	I0717 17:30:29.600533   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 17:30:29.600550   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 17:30:29.600565   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 17:30:29.600590   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 17:30:29.600607   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 17:30:29.600622   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 17:30:29.600641   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 17:30:29.600656   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 17:30:29.600716   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 17:30:29.600755   32725 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 17:30:29.600768   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 17:30:29.600803   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 17:30:29.600835   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 17:30:29.600866   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 17:30:29.600920   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:30:29.600970   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem -> /usr/share/ca-certificates/21577.pem
	I0717 17:30:29.600992   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /usr/share/ca-certificates/215772.pem
	I0717 17:30:29.601010   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:30:29.601048   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:30:29.603806   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:29.604236   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:30:29.604263   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:30:29.604392   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:30:29.604603   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:30:29.604751   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:30:29.604869   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:30:29.673312   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 17:30:29.678206   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 17:30:29.688082   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 17:30:29.691889   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 17:30:29.701045   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 17:30:29.704724   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 17:30:29.714148   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 17:30:29.718640   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 17:30:29.728611   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 17:30:29.732246   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 17:30:29.742446   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 17:30:29.746417   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 17:30:29.756087   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 17:30:29.780885   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 17:30:29.803687   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 17:30:29.826532   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 17:30:29.849270   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 17:30:29.871848   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 17:30:29.895591   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 17:30:29.918963   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 17:30:29.940294   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 17:30:29.961794   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 17:30:29.984348   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 17:30:30.006158   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 17:30:30.021000   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 17:30:30.036035   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 17:30:30.051290   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 17:30:30.066585   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 17:30:30.082040   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 17:30:30.097282   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 17:30:30.112729   32725 ssh_runner.go:195] Run: openssl version
	I0717 17:30:30.118171   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 17:30:30.128383   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 17:30:30.132271   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 17:30:30.132326   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 17:30:30.138026   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 17:30:30.148061   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 17:30:30.158253   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 17:30:30.162047   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 17:30:30.162099   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 17:30:30.167040   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 17:30:30.177143   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 17:30:30.186721   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:30:30.190631   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:30:30.190677   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:30:30.195729   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
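	The link names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names for the three certificate files; the same value can be reproduced by hand, e.g. for the minikube CA (sketch):
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    echo "$hash"                                   # b5213941 for this CA
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"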
	I0717 17:30:30.205315   32725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 17:30:30.209014   32725 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 17:30:30.209063   32725 kubeadm.go:934] updating node {m02 192.168.39.97 8443 v1.30.2 crio true true} ...
	I0717 17:30:30.209166   32725 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174628-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 17:30:30.209194   32725 kube-vip.go:115] generating kube-vip config ...
	I0717 17:30:30.209227   32725 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 17:30:30.224818   32725 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 17:30:30.224879   32725 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
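	With this manifest installed as a static pod on each control plane, kube-vip advertises 192.168.39.254 and load-balances apiserver traffic on port 8443. A minimal reachability check against the VIP (an assumption about how one might verify it by hand, not part of the test) is:
	    curl -k https://192.168.39.254:8443/version
	    # any HTTP answer here, even 401/403, means the VIP is bound and an apiserver is responding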
	I0717 17:30:30.224922   32725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 17:30:30.233315   32725 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 17:30:30.233361   32725 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 17:30:30.241815   32725 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 17:30:30.241839   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 17:30:30.241903   32725 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubelet
	I0717 17:30:30.241928   32725 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubeadm
	I0717 17:30:30.241906   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 17:30:30.245844   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 17:30:30.245870   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 17:31:08.834001   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 17:31:08.834091   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 17:31:08.839777   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 17:31:08.839819   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 17:31:43.865058   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:31:43.881611   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 17:31:43.881700   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 17:31:43.885823   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 17:31:43.885858   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
	I0717 17:31:44.227610   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 17:31:44.236593   32725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0717 17:31:44.251937   32725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 17:31:44.266902   32725 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 17:31:44.281671   32725 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 17:31:44.285240   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:31:44.296055   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:31:44.408090   32725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:31:44.423851   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:31:44.424308   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:31:44.424362   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:31:44.439233   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46041
	I0717 17:31:44.439686   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:31:44.440169   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:31:44.440193   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:31:44.440606   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:31:44.440811   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:31:44.440988   32725 start.go:317] joinCluster: &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:31:44.441111   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 17:31:44.441132   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:31:44.444032   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:31:44.444553   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:31:44.444575   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:31:44.444729   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:31:44.444908   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:31:44.445084   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:31:44.445221   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:31:44.599900   32725 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:31:44.599950   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token knhmpz.6rn9meqs7468hbpw --discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174628-m02 --control-plane --apiserver-advertise-address=192.168.39.97 --apiserver-bind-port=8443"
	I0717 17:32:06.001020   32725 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token knhmpz.6rn9meqs7468hbpw --discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174628-m02 --control-plane --apiserver-advertise-address=192.168.39.97 --apiserver-bind-port=8443": (21.401040933s)
	I0717 17:32:06.001063   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 17:32:06.440073   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174628-m02 minikube.k8s.io/updated_at=2024_07_17T17_32_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=ha-174628 minikube.k8s.io/primary=false
	I0717 17:32:06.560695   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174628-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 17:32:06.653563   32725 start.go:319] duration metric: took 22.212571193s to joinCluster
	I0717 17:32:06.653658   32725 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:32:06.653958   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:32:06.655049   32725 out.go:177] * Verifying Kubernetes components...
	I0717 17:32:06.656342   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:32:06.876327   32725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:32:06.918990   32725 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:32:06.919376   32725 kapi.go:59] client config for ha-174628: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt", KeyFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key", CAFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 17:32:06.919482   32725 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.100:8443
	I0717 17:32:06.919761   32725 node_ready.go:35] waiting up to 6m0s for node "ha-174628-m02" to be "Ready" ...
	I0717 17:32:06.919865   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:06.919876   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:06.919887   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:06.919897   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:06.930531   32725 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 17:32:07.420258   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:07.420280   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:07.420287   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:07.420291   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:07.425735   32725 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 17:32:07.920436   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:07.920460   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:07.920471   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:07.920476   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:07.927951   32725 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 17:32:08.419963   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:08.419986   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:08.419997   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:08.420001   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:08.423661   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:08.920790   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:08.920815   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:08.920826   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:08.920831   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:08.924676   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:08.925156   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:09.420493   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:09.420516   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:09.420524   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:09.420529   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:09.423670   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:09.920007   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:09.920027   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:09.920038   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:09.920043   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:10.027683   32725 round_trippers.go:574] Response Status: 200 OK in 107 milliseconds
	I0717 17:32:10.420452   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:10.420482   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:10.420493   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:10.420499   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:10.424082   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:10.920168   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:10.920188   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:10.920196   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:10.920202   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:10.923032   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:11.420207   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:11.420234   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:11.420244   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:11.420249   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:11.423870   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:11.424327   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:11.920146   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:11.920169   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:11.920179   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:11.920185   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:11.923481   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:12.420855   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:12.420876   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:12.420883   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:12.420889   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:12.423889   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:12.920626   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:12.920645   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:12.920657   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:12.920661   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:12.926695   32725 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 17:32:13.420156   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:13.420182   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:13.420190   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:13.420194   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:13.423208   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:13.920311   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:13.920337   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:13.920346   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:13.920351   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:13.923041   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:13.923545   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:14.420903   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:14.420926   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:14.420939   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:14.420955   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:14.424316   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:14.920976   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:14.921004   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:14.921012   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:14.921016   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:14.924059   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:15.420568   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:15.420591   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:15.420602   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:15.420608   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:15.423888   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:15.920083   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:15.920110   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:15.920119   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:15.920124   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:15.923607   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:15.924223   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:16.420345   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:16.420373   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:16.420384   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:16.420387   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:16.423368   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:16.920615   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:16.920635   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:16.920643   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:16.920646   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:16.923724   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:17.420234   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:17.420257   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:17.420268   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:17.420273   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:17.423467   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:17.920039   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:17.920061   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:17.920070   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:17.920079   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:17.923326   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:18.420955   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:18.420982   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:18.420991   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:18.420994   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:18.424015   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:18.424435   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:18.920864   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:18.920886   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:18.920897   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:18.920901   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:18.924265   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:19.420126   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:19.420147   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:19.420155   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:19.420160   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:19.423319   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:19.920559   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:19.920584   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:19.920593   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:19.920598   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:19.924134   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:20.419960   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:20.419980   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:20.419988   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:20.419992   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:20.423165   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:20.919934   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:20.919954   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:20.919962   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:20.919966   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:20.923306   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:20.923774   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:21.420249   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:21.420273   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:21.420281   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:21.420286   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:21.423157   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:21.920673   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:21.920694   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:21.920702   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:21.920706   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:21.924190   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:22.420227   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:22.420249   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:22.420257   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:22.420261   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:22.423629   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:22.920462   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:22.920495   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:22.920508   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:22.920516   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:22.923703   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:22.924251   32725 node_ready.go:53] node "ha-174628-m02" has status "Ready":"False"
	I0717 17:32:23.420726   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:23.420749   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.420760   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.420764   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.424452   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:23.920232   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:23.920254   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.920262   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.920266   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.923497   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:23.923952   32725 node_ready.go:49] node "ha-174628-m02" has status "Ready":"True"
	I0717 17:32:23.923968   32725 node_ready.go:38] duration metric: took 17.004183592s for node "ha-174628-m02" to be "Ready" ...
	I0717 17:32:23.923985   32725 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 17:32:23.924037   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:32:23.924048   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.924055   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.924058   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.928855   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:32:23.934963   32725 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.935053   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ljjl7
	I0717 17:32:23.935067   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.935077   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.935084   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.937579   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.938300   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:23.938317   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.938328   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.938334   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.940511   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.941063   32725 pod_ready.go:92] pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:23.941082   32725 pod_ready.go:81] duration metric: took 6.095417ms for pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.941093   32725 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.941149   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nb567
	I0717 17:32:23.941160   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.941170   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.941175   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.943542   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.944209   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:23.944299   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.944319   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.944331   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.946492   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.946939   32725 pod_ready.go:92] pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:23.946953   32725 pod_ready.go:81] duration metric: took 5.85384ms for pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.946960   32725 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.946998   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628
	I0717 17:32:23.947005   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.947013   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.947021   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.948985   32725 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 17:32:23.949586   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:23.949602   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.949609   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.949613   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.951824   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.952412   32725 pod_ready.go:92] pod "etcd-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:23.952433   32725 pod_ready.go:81] duration metric: took 5.466483ms for pod "etcd-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.952444   32725 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.952497   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628-m02
	I0717 17:32:23.952505   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.952512   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.952517   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.954704   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.955096   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:23.955107   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:23.955114   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:23.955118   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:23.957222   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:23.957579   32725 pod_ready.go:92] pod "etcd-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:23.957593   32725 pod_ready.go:81] duration metric: took 5.142703ms for pod "etcd-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:23.957605   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:24.121001   32725 request.go:629] Waited for 163.340264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628
	I0717 17:32:24.121098   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628
	I0717 17:32:24.121109   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:24.121121   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:24.121132   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:24.124584   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:24.320415   32725 request.go:629] Waited for 195.279708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:24.320516   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:24.320531   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:24.320544   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:24.320554   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:24.323699   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:24.324116   32725 pod_ready.go:92] pod "kube-apiserver-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:24.324134   32725 pod_ready.go:81] duration metric: took 366.520996ms for pod "kube-apiserver-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:24.324145   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:24.520636   32725 request.go:629] Waited for 196.429854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m02
	I0717 17:32:24.520721   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m02
	I0717 17:32:24.520733   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:24.520744   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:24.520752   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:24.523620   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:24.720686   32725 request.go:629] Waited for 196.346873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:24.720770   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:24.720781   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:24.720792   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:24.720801   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:24.724330   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:24.724759   32725 pod_ready.go:92] pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:24.724778   32725 pod_ready.go:81] duration metric: took 400.626087ms for pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:24.724790   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:24.921008   32725 request.go:629] Waited for 196.113008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628
	I0717 17:32:24.921084   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628
	I0717 17:32:24.921096   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:24.921107   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:24.921114   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:24.924239   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:25.121267   32725 request.go:629] Waited for 196.334079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:25.121368   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:25.121379   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:25.121389   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:25.121397   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:25.124591   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:25.125192   32725 pod_ready.go:92] pod "kube-controller-manager-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:25.125212   32725 pod_ready.go:81] duration metric: took 400.414336ms for pod "kube-controller-manager-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:25.125224   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:25.321146   32725 request.go:629] Waited for 195.85089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m02
	I0717 17:32:25.321231   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m02
	I0717 17:32:25.321241   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:25.321253   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:25.321261   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:25.324440   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:25.520408   32725 request.go:629] Waited for 195.280831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:25.520479   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:25.520485   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:25.520492   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:25.520496   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:25.523976   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:25.524695   32725 pod_ready.go:92] pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:25.524716   32725 pod_ready.go:81] duration metric: took 399.480457ms for pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:25.524727   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7lchn" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:25.720682   32725 request.go:629] Waited for 195.864209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lchn
	I0717 17:32:25.720761   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lchn
	I0717 17:32:25.720773   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:25.720784   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:25.720796   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:25.723983   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:25.921053   32725 request.go:629] Waited for 196.406095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:25.921137   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:25.921149   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:25.921158   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:25.921165   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:25.925851   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:32:25.926451   32725 pod_ready.go:92] pod "kube-proxy-7lchn" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:25.926473   32725 pod_ready.go:81] duration metric: took 401.739165ms for pod "kube-proxy-7lchn" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:25.926486   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fqf9q" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:26.120517   32725 request.go:629] Waited for 193.963518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fqf9q
	I0717 17:32:26.120594   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fqf9q
	I0717 17:32:26.120601   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:26.120614   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:26.120619   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:26.123941   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:26.320843   32725 request.go:629] Waited for 195.959286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:26.320896   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:26.320902   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:26.320913   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:26.320920   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:26.323988   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:26.324768   32725 pod_ready.go:92] pod "kube-proxy-fqf9q" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:26.324787   32725 pod_ready.go:81] duration metric: took 398.293955ms for pod "kube-proxy-fqf9q" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:26.324799   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:26.521285   32725 request.go:629] Waited for 196.402688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628
	I0717 17:32:26.521333   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628
	I0717 17:32:26.521337   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:26.521345   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:26.521348   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:26.524367   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:26.720227   32725 request.go:629] Waited for 195.278906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:26.720311   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:32:26.720318   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:26.720332   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:26.720338   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:26.723719   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:26.724241   32725 pod_ready.go:92] pod "kube-scheduler-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:26.724260   32725 pod_ready.go:81] duration metric: took 399.453568ms for pod "kube-scheduler-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:26.724272   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:26.920265   32725 request.go:629] Waited for 195.912827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m02
	I0717 17:32:26.920345   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m02
	I0717 17:32:26.920352   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:26.920362   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:26.920366   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:26.924161   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:27.120304   32725 request.go:629] Waited for 195.349145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:27.120370   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:32:27.120375   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.120383   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.120387   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.123310   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:32:27.123753   32725 pod_ready.go:92] pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:32:27.123768   32725 pod_ready.go:81] duration metric: took 399.488698ms for pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:32:27.123778   32725 pod_ready.go:38] duration metric: took 3.199783373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 17:32:27.123802   32725 api_server.go:52] waiting for apiserver process to appear ...
	I0717 17:32:27.123879   32725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:32:27.139407   32725 api_server.go:72] duration metric: took 20.485712083s to wait for apiserver process to appear ...
	I0717 17:32:27.139433   32725 api_server.go:88] waiting for apiserver healthz status ...
	I0717 17:32:27.139457   32725 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0717 17:32:27.143893   32725 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0717 17:32:27.143959   32725 round_trippers.go:463] GET https://192.168.39.100:8443/version
	I0717 17:32:27.143966   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.143974   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.143978   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.144741   32725 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 17:32:27.144832   32725 api_server.go:141] control plane version: v1.30.2
	I0717 17:32:27.144847   32725 api_server.go:131] duration metric: took 5.408081ms to wait for apiserver health ...
	I0717 17:32:27.144853   32725 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 17:32:27.321298   32725 request.go:629] Waited for 176.369505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:32:27.321363   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:32:27.321369   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.321376   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.321381   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.326151   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:32:27.330339   32725 system_pods.go:59] 17 kube-system pods found
	I0717 17:32:27.330383   32725 system_pods.go:61] "coredns-7db6d8ff4d-ljjl7" [2c4857a1-6ccd-4122-80b5-f5bcfd2e307f] Running
	I0717 17:32:27.330389   32725 system_pods.go:61] "coredns-7db6d8ff4d-nb567" [1739ac64-be05-4438-9a8f-a0d2821a1650] Running
	I0717 17:32:27.330393   32725 system_pods.go:61] "etcd-ha-174628" [005dbd48-14a2-458a-a8b3-252696a4ce85] Running
	I0717 17:32:27.330396   32725 system_pods.go:61] "etcd-ha-174628-m02" [6598f8f5-41df-46a9-bb82-fcf2ad182e60] Running
	I0717 17:32:27.330399   32725 system_pods.go:61] "kindnet-79txz" [8c09c315-591a-4835-a433-f3bc3283f305] Running
	I0717 17:32:27.330402   32725 system_pods.go:61] "kindnet-k6jnp" [9bca93ed-aca5-4540-990c-d9e6209d12d0] Running
	I0717 17:32:27.330405   32725 system_pods.go:61] "kube-apiserver-ha-174628" [3f169484-b9b1-4be6-abec-2309c0bfecba] Running
	I0717 17:32:27.330408   32725 system_pods.go:61] "kube-apiserver-ha-174628-m02" [316d349c-f099-45c3-a9ab-34fbcaeaae02] Running
	I0717 17:32:27.330410   32725 system_pods.go:61] "kube-controller-manager-ha-174628" [ea259b8d-9fcb-4fb1-9e32-75d6a47e44ed] Running
	I0717 17:32:27.330415   32725 system_pods.go:61] "kube-controller-manager-ha-174628-m02" [0374a405-7fb7-4367-997e-0ac06d57338d] Running
	I0717 17:32:27.330417   32725 system_pods.go:61] "kube-proxy-7lchn" [a01b695f-ec8b-4727-9c82-4251aa34d682] Running
	I0717 17:32:27.330421   32725 system_pods.go:61] "kube-proxy-fqf9q" [f74d57a9-38a2-464d-991f-fc8905fdbe3f] Running
	I0717 17:32:27.330424   32725 system_pods.go:61] "kube-scheduler-ha-174628" [1776b347-cc13-44da-a60a-199bdb85d2c2] Running
	I0717 17:32:27.330426   32725 system_pods.go:61] "kube-scheduler-ha-174628-m02" [ce3683eb-351e-40d4-a704-13dfddc2bdea] Running
	I0717 17:32:27.330429   32725 system_pods.go:61] "kube-vip-ha-174628" [b2d62768-e68e-4ce3-ad84-31ddac00688e] Running
	I0717 17:32:27.330431   32725 system_pods.go:61] "kube-vip-ha-174628-m02" [a6656a18-6176-4291-a094-e4b942e9ba1c] Running
	I0717 17:32:27.330434   32725 system_pods.go:61] "storage-provisioner" [8c0601bb-36f6-434d-8e9d-1e326bf682f5] Running
	I0717 17:32:27.330439   32725 system_pods.go:74] duration metric: took 185.581054ms to wait for pod list to return data ...
	I0717 17:32:27.330446   32725 default_sa.go:34] waiting for default service account to be created ...
	I0717 17:32:27.520831   32725 request.go:629] Waited for 190.319635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0717 17:32:27.520881   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0717 17:32:27.520891   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.520898   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.520903   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.524042   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:27.524241   32725 default_sa.go:45] found service account: "default"
	I0717 17:32:27.524258   32725 default_sa.go:55] duration metric: took 193.805436ms for default service account to be created ...
	I0717 17:32:27.524268   32725 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 17:32:27.720750   32725 request.go:629] Waited for 196.407045ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:32:27.720811   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:32:27.720816   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.720824   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.720828   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.726452   32725 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 17:32:27.730318   32725 system_pods.go:86] 17 kube-system pods found
	I0717 17:32:27.730340   32725 system_pods.go:89] "coredns-7db6d8ff4d-ljjl7" [2c4857a1-6ccd-4122-80b5-f5bcfd2e307f] Running
	I0717 17:32:27.730346   32725 system_pods.go:89] "coredns-7db6d8ff4d-nb567" [1739ac64-be05-4438-9a8f-a0d2821a1650] Running
	I0717 17:32:27.730350   32725 system_pods.go:89] "etcd-ha-174628" [005dbd48-14a2-458a-a8b3-252696a4ce85] Running
	I0717 17:32:27.730354   32725 system_pods.go:89] "etcd-ha-174628-m02" [6598f8f5-41df-46a9-bb82-fcf2ad182e60] Running
	I0717 17:32:27.730358   32725 system_pods.go:89] "kindnet-79txz" [8c09c315-591a-4835-a433-f3bc3283f305] Running
	I0717 17:32:27.730362   32725 system_pods.go:89] "kindnet-k6jnp" [9bca93ed-aca5-4540-990c-d9e6209d12d0] Running
	I0717 17:32:27.730366   32725 system_pods.go:89] "kube-apiserver-ha-174628" [3f169484-b9b1-4be6-abec-2309c0bfecba] Running
	I0717 17:32:27.730369   32725 system_pods.go:89] "kube-apiserver-ha-174628-m02" [316d349c-f099-45c3-a9ab-34fbcaeaae02] Running
	I0717 17:32:27.730373   32725 system_pods.go:89] "kube-controller-manager-ha-174628" [ea259b8d-9fcb-4fb1-9e32-75d6a47e44ed] Running
	I0717 17:32:27.730377   32725 system_pods.go:89] "kube-controller-manager-ha-174628-m02" [0374a405-7fb7-4367-997e-0ac06d57338d] Running
	I0717 17:32:27.730381   32725 system_pods.go:89] "kube-proxy-7lchn" [a01b695f-ec8b-4727-9c82-4251aa34d682] Running
	I0717 17:32:27.730384   32725 system_pods.go:89] "kube-proxy-fqf9q" [f74d57a9-38a2-464d-991f-fc8905fdbe3f] Running
	I0717 17:32:27.730388   32725 system_pods.go:89] "kube-scheduler-ha-174628" [1776b347-cc13-44da-a60a-199bdb85d2c2] Running
	I0717 17:32:27.730392   32725 system_pods.go:89] "kube-scheduler-ha-174628-m02" [ce3683eb-351e-40d4-a704-13dfddc2bdea] Running
	I0717 17:32:27.730396   32725 system_pods.go:89] "kube-vip-ha-174628" [b2d62768-e68e-4ce3-ad84-31ddac00688e] Running
	I0717 17:32:27.730399   32725 system_pods.go:89] "kube-vip-ha-174628-m02" [a6656a18-6176-4291-a094-e4b942e9ba1c] Running
	I0717 17:32:27.730402   32725 system_pods.go:89] "storage-provisioner" [8c0601bb-36f6-434d-8e9d-1e326bf682f5] Running
	I0717 17:32:27.730408   32725 system_pods.go:126] duration metric: took 206.135707ms to wait for k8s-apps to be running ...
	I0717 17:32:27.730418   32725 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 17:32:27.730461   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:32:27.745769   32725 system_svc.go:56] duration metric: took 15.343153ms WaitForService to wait for kubelet
	I0717 17:32:27.745797   32725 kubeadm.go:582] duration metric: took 21.092108876s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:32:27.745825   32725 node_conditions.go:102] verifying NodePressure condition ...
	I0717 17:32:27.921231   32725 request.go:629] Waited for 175.344959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes
	I0717 17:32:27.921292   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes
	I0717 17:32:27.921298   32725 round_trippers.go:469] Request Headers:
	I0717 17:32:27.921305   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:32:27.921311   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:32:27.924530   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:32:27.925278   32725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:32:27.925314   32725 node_conditions.go:123] node cpu capacity is 2
	I0717 17:32:27.925333   32725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:32:27.925338   32725 node_conditions.go:123] node cpu capacity is 2
	I0717 17:32:27.925345   32725 node_conditions.go:105] duration metric: took 179.514948ms to run NodePressure ...
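The capacity figures logged just above (2 CPUs and 17734596Ki of ephemeral storage per node) are read from each Node object's status during the NodePressure check. Below is a minimal client-go sketch of the same kind of query; the kubeconfig path is an assumption for illustration and this is not minikube's own implementation.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location, purely for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Mirrors the node_conditions.go lines above: one capacity pair per node.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}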
	I0717 17:32:27.925360   32725 start.go:241] waiting for startup goroutines ...
	I0717 17:32:27.925384   32725 start.go:255] writing updated cluster config ...
	I0717 17:32:27.927397   32725 out.go:177] 
	I0717 17:32:27.929043   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:32:27.929126   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:32:27.930795   32725 out.go:177] * Starting "ha-174628-m03" control-plane node in "ha-174628" cluster
	I0717 17:32:27.931898   32725 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:32:27.931916   32725 cache.go:56] Caching tarball of preloaded images
	I0717 17:32:27.932005   32725 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 17:32:27.932018   32725 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 17:32:27.932087   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:32:27.932239   32725 start.go:360] acquireMachinesLock for ha-174628-m03: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 17:32:27.932277   32725 start.go:364] duration metric: took 18.412µs to acquireMachinesLock for "ha-174628-m03"
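The acquireMachinesLock step above serializes VM creation across concurrent minikube operations; note the Delay:500ms and Timeout:13m0s in the lock config. The sketch below shows one way such a lock can be built on top of an exclusive file create. Minikube itself uses a dedicated mutex package, so the helper name and the O_EXCL approach here are assumptions for illustration only.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireFileLock is a hypothetical helper: it creates the lock file exclusively,
// retrying with a fixed delay until it succeeds or the timeout elapses.
func acquireFileLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for lock " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	// Delay and timeout echo the values shown in the log line above.
	release, err := acquireFileLock("/tmp/ha-174628-m03.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}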
	I0717 17:32:27.932298   32725 start.go:93] Provisioning new machine with config: &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
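In the Nodes list of the config dump above, the new member appears as {Name:m03 IP: ...} with an empty IP: the address only becomes known once the VM boots and obtains a DHCP lease later in this log. A purely illustrative Go rendering of those three entries follows; NodeSpec is a hypothetical type, not minikube's.

package main

import "fmt"

// NodeSpec is a hypothetical stand-in for a per-node entry in the profile config.
type NodeSpec struct {
	Name         string
	IP           string
	Port         int
	ControlPlane bool
	Worker       bool
}

func main() {
	nodes := []NodeSpec{
		{Name: "", IP: "192.168.39.100", Port: 8443, ControlPlane: true, Worker: true}, // primary control plane
		{Name: "m02", IP: "192.168.39.97", Port: 8443, ControlPlane: true, Worker: true},
		{Name: "m03", IP: "", Port: 8443, ControlPlane: true, Worker: true}, // not yet provisioned
	}
	for _, n := range nodes {
		fmt.Printf("%-4s ip=%q control-plane=%v\n", n.Name, n.IP, n.ControlPlane)
	}
}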
	I0717 17:32:27.932401   32725 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0717 17:32:27.933866   32725 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 17:32:27.933951   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:32:27.933983   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:32:27.949065   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I0717 17:32:27.949537   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:32:27.950074   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:32:27.950098   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:32:27.950419   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:32:27.950581   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetMachineName
	I0717 17:32:27.950730   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:27.950865   32725 start.go:159] libmachine.API.Create for "ha-174628" (driver="kvm2")
	I0717 17:32:27.950893   32725 client.go:168] LocalClient.Create starting
	I0717 17:32:27.950939   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 17:32:27.950976   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:32:27.950996   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:32:27.951057   32725 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 17:32:27.951083   32725 main.go:141] libmachine: Decoding PEM data...
	I0717 17:32:27.951099   32725 main.go:141] libmachine: Parsing certificate...
	I0717 17:32:27.951131   32725 main.go:141] libmachine: Running pre-create checks...
	I0717 17:32:27.951146   32725 main.go:141] libmachine: (ha-174628-m03) Calling .PreCreateCheck
	I0717 17:32:27.951311   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetConfigRaw
	I0717 17:32:27.951698   32725 main.go:141] libmachine: Creating machine...
	I0717 17:32:27.951713   32725 main.go:141] libmachine: (ha-174628-m03) Calling .Create
	I0717 17:32:27.951881   32725 main.go:141] libmachine: (ha-174628-m03) Creating KVM machine...
	I0717 17:32:27.953177   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found existing default KVM network
	I0717 17:32:27.953293   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found existing private KVM network mk-ha-174628
	I0717 17:32:27.953451   32725 main.go:141] libmachine: (ha-174628-m03) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03 ...
	I0717 17:32:27.953475   32725 main.go:141] libmachine: (ha-174628-m03) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 17:32:27.953531   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:27.953420   33749 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:32:27.953661   32725 main.go:141] libmachine: (ha-174628-m03) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 17:32:28.170503   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:28.170356   33749 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa...
	I0717 17:32:28.227484   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:28.227377   33749 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/ha-174628-m03.rawdisk...
	I0717 17:32:28.227511   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Writing magic tar header
	I0717 17:32:28.227520   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Writing SSH key tar header
	I0717 17:32:28.227528   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:28.227496   33749 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03 ...
	I0717 17:32:28.227683   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03
	I0717 17:32:28.227715   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03 (perms=drwx------)
	I0717 17:32:28.227727   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 17:32:28.227740   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 17:32:28.227756   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 17:32:28.227767   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 17:32:28.227780   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 17:32:28.227792   32725 main.go:141] libmachine: (ha-174628-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 17:32:28.227809   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:32:28.227820   32725 main.go:141] libmachine: (ha-174628-m03) Creating domain...
	I0717 17:32:28.227838   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 17:32:28.227850   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 17:32:28.227865   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home/jenkins
	I0717 17:32:28.227874   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Checking permissions on dir: /home
	I0717 17:32:28.227883   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Skipping /home - not owner
	I0717 17:32:28.228822   32725 main.go:141] libmachine: (ha-174628-m03) define libvirt domain using xml: 
	I0717 17:32:28.228840   32725 main.go:141] libmachine: (ha-174628-m03) <domain type='kvm'>
	I0717 17:32:28.228851   32725 main.go:141] libmachine: (ha-174628-m03)   <name>ha-174628-m03</name>
	I0717 17:32:28.228864   32725 main.go:141] libmachine: (ha-174628-m03)   <memory unit='MiB'>2200</memory>
	I0717 17:32:28.228877   32725 main.go:141] libmachine: (ha-174628-m03)   <vcpu>2</vcpu>
	I0717 17:32:28.228890   32725 main.go:141] libmachine: (ha-174628-m03)   <features>
	I0717 17:32:28.228898   32725 main.go:141] libmachine: (ha-174628-m03)     <acpi/>
	I0717 17:32:28.228907   32725 main.go:141] libmachine: (ha-174628-m03)     <apic/>
	I0717 17:32:28.228918   32725 main.go:141] libmachine: (ha-174628-m03)     <pae/>
	I0717 17:32:28.228926   32725 main.go:141] libmachine: (ha-174628-m03)     
	I0717 17:32:28.228937   32725 main.go:141] libmachine: (ha-174628-m03)   </features>
	I0717 17:32:28.228961   32725 main.go:141] libmachine: (ha-174628-m03)   <cpu mode='host-passthrough'>
	I0717 17:32:28.228974   32725 main.go:141] libmachine: (ha-174628-m03)   
	I0717 17:32:28.228985   32725 main.go:141] libmachine: (ha-174628-m03)   </cpu>
	I0717 17:32:28.228996   32725 main.go:141] libmachine: (ha-174628-m03)   <os>
	I0717 17:32:28.229007   32725 main.go:141] libmachine: (ha-174628-m03)     <type>hvm</type>
	I0717 17:32:28.229018   32725 main.go:141] libmachine: (ha-174628-m03)     <boot dev='cdrom'/>
	I0717 17:32:28.229036   32725 main.go:141] libmachine: (ha-174628-m03)     <boot dev='hd'/>
	I0717 17:32:28.229048   32725 main.go:141] libmachine: (ha-174628-m03)     <bootmenu enable='no'/>
	I0717 17:32:28.229066   32725 main.go:141] libmachine: (ha-174628-m03)   </os>
	I0717 17:32:28.229075   32725 main.go:141] libmachine: (ha-174628-m03)   <devices>
	I0717 17:32:28.229082   32725 main.go:141] libmachine: (ha-174628-m03)     <disk type='file' device='cdrom'>
	I0717 17:32:28.229097   32725 main.go:141] libmachine: (ha-174628-m03)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/boot2docker.iso'/>
	I0717 17:32:28.229108   32725 main.go:141] libmachine: (ha-174628-m03)       <target dev='hdc' bus='scsi'/>
	I0717 17:32:28.229119   32725 main.go:141] libmachine: (ha-174628-m03)       <readonly/>
	I0717 17:32:28.229127   32725 main.go:141] libmachine: (ha-174628-m03)     </disk>
	I0717 17:32:28.229139   32725 main.go:141] libmachine: (ha-174628-m03)     <disk type='file' device='disk'>
	I0717 17:32:28.229150   32725 main.go:141] libmachine: (ha-174628-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 17:32:28.229187   32725 main.go:141] libmachine: (ha-174628-m03)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/ha-174628-m03.rawdisk'/>
	I0717 17:32:28.229207   32725 main.go:141] libmachine: (ha-174628-m03)       <target dev='hda' bus='virtio'/>
	I0717 17:32:28.229221   32725 main.go:141] libmachine: (ha-174628-m03)     </disk>
	I0717 17:32:28.229232   32725 main.go:141] libmachine: (ha-174628-m03)     <interface type='network'>
	I0717 17:32:28.229244   32725 main.go:141] libmachine: (ha-174628-m03)       <source network='mk-ha-174628'/>
	I0717 17:32:28.229258   32725 main.go:141] libmachine: (ha-174628-m03)       <model type='virtio'/>
	I0717 17:32:28.229277   32725 main.go:141] libmachine: (ha-174628-m03)     </interface>
	I0717 17:32:28.229290   32725 main.go:141] libmachine: (ha-174628-m03)     <interface type='network'>
	I0717 17:32:28.229302   32725 main.go:141] libmachine: (ha-174628-m03)       <source network='default'/>
	I0717 17:32:28.229310   32725 main.go:141] libmachine: (ha-174628-m03)       <model type='virtio'/>
	I0717 17:32:28.229322   32725 main.go:141] libmachine: (ha-174628-m03)     </interface>
	I0717 17:32:28.229332   32725 main.go:141] libmachine: (ha-174628-m03)     <serial type='pty'>
	I0717 17:32:28.229341   32725 main.go:141] libmachine: (ha-174628-m03)       <target port='0'/>
	I0717 17:32:28.229350   32725 main.go:141] libmachine: (ha-174628-m03)     </serial>
	I0717 17:32:28.229379   32725 main.go:141] libmachine: (ha-174628-m03)     <console type='pty'>
	I0717 17:32:28.229399   32725 main.go:141] libmachine: (ha-174628-m03)       <target type='serial' port='0'/>
	I0717 17:32:28.229414   32725 main.go:141] libmachine: (ha-174628-m03)     </console>
	I0717 17:32:28.229425   32725 main.go:141] libmachine: (ha-174628-m03)     <rng model='virtio'>
	I0717 17:32:28.229439   32725 main.go:141] libmachine: (ha-174628-m03)       <backend model='random'>/dev/random</backend>
	I0717 17:32:28.229454   32725 main.go:141] libmachine: (ha-174628-m03)     </rng>
	I0717 17:32:28.229464   32725 main.go:141] libmachine: (ha-174628-m03)     
	I0717 17:32:28.229475   32725 main.go:141] libmachine: (ha-174628-m03)     
	I0717 17:32:28.229487   32725 main.go:141] libmachine: (ha-174628-m03)   </devices>
	I0717 17:32:28.229496   32725 main.go:141] libmachine: (ha-174628-m03) </domain>
	I0717 17:32:28.229506   32725 main.go:141] libmachine: (ha-174628-m03) 
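The XML printed above is handed to libvirt to define and then start the ha-174628-m03 domain. The kvm2 driver does this through its own plugin, so the snippet below is only a rough sketch using the libvirt Go bindings (libvirt.org/go/libvirt), with domainXML standing in for the document shown in the log.

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	domainXML := "<domain type='kvm'>...</domain>" // stand-in for the XML emitted above

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..." boots the VM
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}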
	I0717 17:32:28.236645   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:87:41:a4 in network default
	I0717 17:32:28.237177   32725 main.go:141] libmachine: (ha-174628-m03) Ensuring networks are active...
	I0717 17:32:28.237192   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:28.237810   32725 main.go:141] libmachine: (ha-174628-m03) Ensuring network default is active
	I0717 17:32:28.238073   32725 main.go:141] libmachine: (ha-174628-m03) Ensuring network mk-ha-174628 is active
	I0717 17:32:28.238357   32725 main.go:141] libmachine: (ha-174628-m03) Getting domain xml...
	I0717 17:32:28.239064   32725 main.go:141] libmachine: (ha-174628-m03) Creating domain...
	I0717 17:32:29.458219   32725 main.go:141] libmachine: (ha-174628-m03) Waiting to get IP...
	I0717 17:32:29.459153   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:29.459623   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:29.459644   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:29.459603   33749 retry.go:31] will retry after 192.524869ms: waiting for machine to come up
	I0717 17:32:29.654067   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:29.654552   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:29.654576   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:29.654509   33749 retry.go:31] will retry after 255.817162ms: waiting for machine to come up
	I0717 17:32:29.911892   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:29.912304   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:29.912331   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:29.912265   33749 retry.go:31] will retry after 303.807574ms: waiting for machine to come up
	I0717 17:32:30.217818   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:30.218235   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:30.218256   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:30.218193   33749 retry.go:31] will retry after 370.345102ms: waiting for machine to come up
	I0717 17:32:30.589636   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:30.590142   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:30.590172   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:30.590090   33749 retry.go:31] will retry after 634.938743ms: waiting for machine to come up
	I0717 17:32:31.226831   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:31.227384   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:31.227421   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:31.227366   33749 retry.go:31] will retry after 656.775829ms: waiting for machine to come up
	I0717 17:32:31.886438   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:31.886791   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:31.886821   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:31.886749   33749 retry.go:31] will retry after 817.914558ms: waiting for machine to come up
	I0717 17:32:32.705616   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:32.705977   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:32.706002   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:32.705934   33749 retry.go:31] will retry after 1.159163832s: waiting for machine to come up
	I0717 17:32:33.867104   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:33.867567   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:33.867593   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:33.867530   33749 retry.go:31] will retry after 1.236671526s: waiting for machine to come up
	I0717 17:32:35.105805   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:35.106230   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:35.106253   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:35.106187   33749 retry.go:31] will retry after 2.082191353s: waiting for machine to come up
	I0717 17:32:37.190467   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:37.190882   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:37.190907   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:37.190844   33749 retry.go:31] will retry after 2.239846165s: waiting for machine to come up
	I0717 17:32:39.431818   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:39.432388   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:39.432409   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:39.432355   33749 retry.go:31] will retry after 2.202455513s: waiting for machine to come up
	I0717 17:32:41.636343   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:41.636755   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:41.636778   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:41.636719   33749 retry.go:31] will retry after 4.069466996s: waiting for machine to come up
	I0717 17:32:45.707317   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:45.707823   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find current IP address of domain ha-174628-m03 in network mk-ha-174628
	I0717 17:32:45.707864   32725 main.go:141] libmachine: (ha-174628-m03) DBG | I0717 17:32:45.707796   33749 retry.go:31] will retry after 4.852459037s: waiting for machine to come up
	I0717 17:32:50.562133   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.562635   32725 main.go:141] libmachine: (ha-174628-m03) Found IP for machine: 192.168.39.187
	I0717 17:32:50.562667   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has current primary IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.562676   32725 main.go:141] libmachine: (ha-174628-m03) Reserving static IP address...
	I0717 17:32:50.563216   32725 main.go:141] libmachine: (ha-174628-m03) DBG | unable to find host DHCP lease matching {name: "ha-174628-m03", mac: "52:54:00:4c:e1:a8", ip: "192.168.39.187"} in network mk-ha-174628
	I0717 17:32:50.638208   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Getting to WaitForSSH function...
	I0717 17:32:50.638240   32725 main.go:141] libmachine: (ha-174628-m03) Reserved static IP address: 192.168.39.187
	I0717 17:32:50.638254   32725 main.go:141] libmachine: (ha-174628-m03) Waiting for SSH to be available...
	I0717 17:32:50.641124   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.641703   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:50.641733   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.641896   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Using SSH client type: external
	I0717 17:32:50.641922   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa (-rw-------)
	I0717 17:32:50.641997   32725 main.go:141] libmachine: (ha-174628-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 17:32:50.642021   32725 main.go:141] libmachine: (ha-174628-m03) DBG | About to run SSH command:
	I0717 17:32:50.642034   32725 main.go:141] libmachine: (ha-174628-m03) DBG | exit 0
	I0717 17:32:50.769047   32725 main.go:141] libmachine: (ha-174628-m03) DBG | SSH cmd err, output: <nil>: 
	I0717 17:32:50.769300   32725 main.go:141] libmachine: (ha-174628-m03) KVM machine creation complete!
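Machine creation is declared complete once a plain `exit 0` succeeds over SSH, using the external ssh invocation logged a few lines up. A cut-down sketch of that readiness probe is below; the host and key path are taken from this run, while the helper name and the retry budget are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once `ssh ... exit 0` succeeds against the new VM.
func sshReady(host, keyPath string, attempts int, wait time.Duration) bool {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + host,
		"exit", "0",
	}
	for i := 0; i < attempts; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return true
		}
		time.Sleep(wait)
	}
	return false
}

func main() {
	ok := sshReady("192.168.39.187",
		"/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa",
		30, 2*time.Second) // retry budget is an assumption
	fmt.Println("ssh available:", ok)
}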
	I0717 17:32:50.769649   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetConfigRaw
	I0717 17:32:50.770180   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:50.770431   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:50.770598   32725 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 17:32:50.770611   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:32:50.771822   32725 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 17:32:50.771847   32725 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 17:32:50.771856   32725 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 17:32:50.771866   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:50.774382   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.774707   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:50.774736   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.774863   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:50.775019   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.775181   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.775317   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:50.775468   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:50.775713   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:50.775728   32725 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 17:32:50.880086   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:32:50.880115   32725 main.go:141] libmachine: Detecting the provisioner...
	I0717 17:32:50.880123   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:50.882869   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.883361   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:50.883389   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.883603   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:50.883835   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.883977   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.884100   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:50.884229   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:50.884441   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:50.884457   32725 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 17:32:50.989607   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 17:32:50.989684   32725 main.go:141] libmachine: found compatible host: buildroot
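Provisioner detection above is nothing more than `cat /etc/os-release` followed by a match on the ID field, which here reports buildroot. A small sketch of that parsing step, reading a local file instead of the SSH output purely for illustration:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			info[k] = strings.Trim(v, `"`)
		}
	}
	// The log shows ID=buildroot, which maps to the Buildroot provisioner.
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot", info["VERSION_ID"])
	} else {
		fmt.Println("detected OS:", info["ID"])
	}
}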
	I0717 17:32:50.989693   32725 main.go:141] libmachine: Provisioning with buildroot...
	I0717 17:32:50.989703   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetMachineName
	I0717 17:32:50.989952   32725 buildroot.go:166] provisioning hostname "ha-174628-m03"
	I0717 17:32:50.989981   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetMachineName
	I0717 17:32:50.990157   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:50.993246   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.993618   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:50.993648   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:50.993822   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:50.994010   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.994163   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:50.994383   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:50.994563   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:50.994754   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:50.994771   32725 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174628-m03 && echo "ha-174628-m03" | sudo tee /etc/hostname
	I0717 17:32:51.115100   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174628-m03
	
	I0717 17:32:51.115133   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.117990   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.118349   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.118388   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.118556   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:51.118733   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.118920   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.119058   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:51.119241   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:51.119457   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:51.119473   32725 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174628-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174628-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174628-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 17:32:51.234384   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:32:51.234422   32725 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 17:32:51.234441   32725 buildroot.go:174] setting up certificates
	I0717 17:32:51.234451   32725 provision.go:84] configureAuth start
	I0717 17:32:51.234459   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetMachineName
	I0717 17:32:51.234726   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:32:51.237783   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.238222   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.238249   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.238482   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.240655   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.241000   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.241025   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.241218   32725 provision.go:143] copyHostCerts
	I0717 17:32:51.241243   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:32:51.241274   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 17:32:51.241283   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:32:51.241355   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 17:32:51.241432   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:32:51.241449   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 17:32:51.241456   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:32:51.241481   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 17:32:51.241526   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:32:51.241544   32725 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 17:32:51.241550   32725 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:32:51.241581   32725 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 17:32:51.241632   32725 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.ha-174628-m03 san=[127.0.0.1 192.168.39.187 ha-174628-m03 localhost minikube]
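The server certificate generated above is signed by the profile CA and carries the SANs listed in the log line (127.0.0.1, 192.168.39.187, ha-174628-m03, localhost, minikube). A compact crypto/x509 sketch of the same idea follows; it generates a throwaway CA in-process instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, a stand-in for ca.pem / ca-key.pem; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-174628-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"ha-174628-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.187")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}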
	I0717 17:32:51.438209   32725 provision.go:177] copyRemoteCerts
	I0717 17:32:51.438266   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 17:32:51.438288   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.441235   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.441643   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.441670   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.441882   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:51.442123   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.442254   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:51.442473   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:32:51.523065   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 17:32:51.523145   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 17:32:51.546335   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 17:32:51.546401   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 17:32:51.570411   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 17:32:51.570499   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 17:32:51.595232   32725 provision.go:87] duration metric: took 360.767696ms to configureAuth
	I0717 17:32:51.595256   32725 buildroot.go:189] setting minikube options for container-runtime
	I0717 17:32:51.595510   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:32:51.595615   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.598390   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.598850   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.598879   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.599104   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:51.599295   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.599473   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.599621   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:51.599834   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:51.600027   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:51.600049   32725 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 17:32:51.860783   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 17:32:51.860809   32725 main.go:141] libmachine: Checking connection to Docker...
	I0717 17:32:51.860822   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetURL
	I0717 17:32:51.862219   32725 main.go:141] libmachine: (ha-174628-m03) DBG | Using libvirt version 6000000
	I0717 17:32:51.864575   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.864983   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.865012   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.865175   32725 main.go:141] libmachine: Docker is up and running!
	I0717 17:32:51.865191   32725 main.go:141] libmachine: Reticulating splines...
	I0717 17:32:51.865200   32725 client.go:171] duration metric: took 23.91429607s to LocalClient.Create
	I0717 17:32:51.865228   32725 start.go:167] duration metric: took 23.914361787s to libmachine.API.Create "ha-174628"
	I0717 17:32:51.865246   32725 start.go:293] postStartSetup for "ha-174628-m03" (driver="kvm2")
	I0717 17:32:51.865267   32725 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 17:32:51.865292   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:51.865542   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 17:32:51.865579   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.867591   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.867877   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.867897   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.868048   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:51.868205   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.868334   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:51.868477   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:32:51.951472   32725 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 17:32:51.955703   32725 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 17:32:51.955728   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 17:32:51.955785   32725 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 17:32:51.955863   32725 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 17:32:51.955875   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /etc/ssl/certs/215772.pem
	I0717 17:32:51.955978   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 17:32:51.966128   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:32:51.988282   32725 start.go:296] duration metric: took 123.019698ms for postStartSetup
	I0717 17:32:51.988339   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetConfigRaw
	I0717 17:32:51.988868   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:32:51.991627   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.992133   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.992196   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.992428   32725 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:32:51.992611   32725 start.go:128] duration metric: took 24.060201383s to createHost
	I0717 17:32:51.992643   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:51.994793   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.995153   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:51.995189   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:51.995322   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:51.995518   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.995650   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:51.995784   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:51.995931   32725 main.go:141] libmachine: Using SSH client type: native
	I0717 17:32:51.996120   32725 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0717 17:32:51.996141   32725 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 17:32:52.101672   32725 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721237572.079082425
	
	I0717 17:32:52.101699   32725 fix.go:216] guest clock: 1721237572.079082425
	I0717 17:32:52.101709   32725 fix.go:229] Guest: 2024-07-17 17:32:52.079082425 +0000 UTC Remote: 2024-07-17 17:32:51.992633283 +0000 UTC m=+215.699559180 (delta=86.449142ms)
	I0717 17:32:52.101735   32725 fix.go:200] guest clock delta is within tolerance: 86.449142ms
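The "date +%!s(MISSING).%!N(MISSING)" line is a Go format-verb rendering artifact in the log; the command actually executed on the guest is "date +%s.%N", and the reported delta is simply the guest timestamp minus the local timestamp taken at the start of the check. A minimal sketch of the same comparison, reusing the SSH key and address shown in the log (illustrative only, not part of the test run):

    # guest clock, read the same way the log does
    ssh -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa \
        docker@192.168.39.187 'date +%s.%N'
    # local clock, for comparison; the log reports the difference as the delta
    date +%s.%N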
	I0717 17:32:52.101750   32725 start.go:83] releasing machines lock for "ha-174628-m03", held for 24.169461849s
	I0717 17:32:52.101778   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:52.102081   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:32:52.104685   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.105074   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:52.105103   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.107459   32725 out.go:177] * Found network options:
	I0717 17:32:52.108860   32725 out.go:177]   - NO_PROXY=192.168.39.100,192.168.39.97
	W0717 17:32:52.110135   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 17:32:52.110161   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 17:32:52.110174   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:52.110746   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:52.110932   32725 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:32:52.111044   32725 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 17:32:52.111079   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	W0717 17:32:52.111158   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 17:32:52.111182   32725 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 17:32:52.111257   32725 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 17:32:52.111277   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:32:52.115229   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.115352   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.115724   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:52.115748   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.115773   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:52.115792   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:52.115904   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:52.116017   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:32:52.116114   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:52.116207   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:32:52.116275   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:52.116409   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:32:52.116425   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:32:52.116592   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:32:52.351926   32725 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 17:32:52.357460   32725 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 17:32:52.357535   32725 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 17:32:52.372594   32725 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 17:32:52.372613   32725 start.go:495] detecting cgroup driver to use...
	I0717 17:32:52.372669   32725 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 17:32:52.387789   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 17:32:52.401328   32725 docker.go:217] disabling cri-docker service (if available) ...
	I0717 17:32:52.401390   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 17:32:52.415399   32725 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 17:32:52.428310   32725 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 17:32:52.547805   32725 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 17:32:52.686824   32725 docker.go:233] disabling docker service ...
	I0717 17:32:52.686894   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 17:32:52.701619   32725 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 17:32:52.714722   32725 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 17:32:52.857434   32725 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 17:32:52.974350   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 17:32:52.988214   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 17:32:53.006069   32725 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 17:32:53.006132   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.017180   32725 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 17:32:53.017255   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.027942   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.037867   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.047784   32725 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 17:32:53.057458   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.067514   32725 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:32:53.082777   32725 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
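Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment. This is a sketch of the expected end state, not output captured from the node, and the section headers assume CRI-O's standard TOML layout:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]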
	I0717 17:32:53.092567   32725 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 17:32:53.101279   32725 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 17:32:53.101334   32725 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 17:32:53.112489   32725 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
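The modprobe and ip_forward steps satisfy the usual kubeadm networking prerequisites for the current boot only. A persistent variant, which is not what minikube does here and is shown purely for context, would look like:

    cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system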
	I0717 17:32:53.120888   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:32:53.232964   32725 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 17:32:53.371442   32725 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 17:32:53.371507   32725 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 17:32:53.375863   32725 start.go:563] Will wait 60s for crictl version
	I0717 17:32:53.375922   32725 ssh_runner.go:195] Run: which crictl
	I0717 17:32:53.379325   32725 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 17:32:53.417781   32725 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 17:32:53.417871   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:32:53.448362   32725 ssh_runner.go:195] Run: crio --version
	I0717 17:32:53.479015   32725 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 17:32:53.480665   32725 out.go:177]   - env NO_PROXY=192.168.39.100
	I0717 17:32:53.482211   32725 out.go:177]   - env NO_PROXY=192.168.39.100,192.168.39.97
	I0717 17:32:53.483754   32725 main.go:141] libmachine: (ha-174628-m03) Calling .GetIP
	I0717 17:32:53.487140   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:53.487538   32725 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:32:53.487569   32725 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:32:53.487773   32725 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 17:32:53.492188   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:32:53.504283   32725 mustload.go:65] Loading cluster: ha-174628
	I0717 17:32:53.504522   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:32:53.504863   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:32:53.504912   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:32:53.520585   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40729
	I0717 17:32:53.520979   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:32:53.521520   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:32:53.521555   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:32:53.521870   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:32:53.522067   32725 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:32:53.523866   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:32:53.524289   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:32:53.524343   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:32:53.539209   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39055
	I0717 17:32:53.539638   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:32:53.540040   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:32:53.540057   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:32:53.540399   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:32:53.540604   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:32:53.540776   32725 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628 for IP: 192.168.39.187
	I0717 17:32:53.540909   32725 certs.go:194] generating shared ca certs ...
	I0717 17:32:53.540967   32725 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:32:53.541136   32725 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 17:32:53.541189   32725 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 17:32:53.541203   32725 certs.go:256] generating profile certs ...
	I0717 17:32:53.541342   32725 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key
	I0717 17:32:53.541373   32725 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.256de965
	I0717 17:32:53.541395   32725 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.256de965 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.97 192.168.39.187 192.168.39.254]
	I0717 17:32:53.654771   32725 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.256de965 ...
	I0717 17:32:53.654802   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.256de965: {Name:mka9a94d0ef93b6feff80505c13cb6cb0977edc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:32:53.654988   32725 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.256de965 ...
	I0717 17:32:53.655002   32725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.256de965: {Name:mk097b9771d7a02dd6c417fdd0556e1661a3afd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:32:53.655078   32725 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.256de965 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt
	I0717 17:32:53.655247   32725 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.256de965 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key
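The regenerated apiserver certificate has to cover the new node alongside the existing control-plane members, the service IPs, and the kube-vip address. A quick way to confirm the SAN list on the generated cert, run on the CI host (illustrative):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expected to list 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.100, 192.168.39.97,
    # 192.168.39.187 and 192.168.39.254, matching the IP list logged above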
	I0717 17:32:53.655385   32725 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key
	I0717 17:32:53.655401   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 17:32:53.655417   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 17:32:53.655432   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 17:32:53.655446   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 17:32:53.655461   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 17:32:53.655474   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 17:32:53.655485   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 17:32:53.655500   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 17:32:53.655559   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 17:32:53.655591   32725 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 17:32:53.655602   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 17:32:53.655627   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 17:32:53.655653   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 17:32:53.655676   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 17:32:53.655719   32725 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:32:53.655750   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /usr/share/ca-certificates/215772.pem
	I0717 17:32:53.655765   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:32:53.655780   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem -> /usr/share/ca-certificates/21577.pem
	I0717 17:32:53.655812   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:32:53.658925   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:32:53.659339   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:32:53.659367   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:32:53.659536   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:32:53.659736   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:32:53.659892   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:32:53.660020   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:32:53.729334   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0717 17:32:53.734050   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0717 17:32:53.751730   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0717 17:32:53.756178   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0717 17:32:53.766181   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0717 17:32:53.770225   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0717 17:32:53.780232   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0717 17:32:53.784036   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0717 17:32:53.793503   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0717 17:32:53.797105   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0717 17:32:53.807668   32725 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0717 17:32:53.811652   32725 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0717 17:32:53.822961   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 17:32:53.848987   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 17:32:53.872822   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 17:32:53.895475   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 17:32:53.919273   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0717 17:32:53.943792   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 17:32:53.968284   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 17:32:53.992314   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 17:32:54.016776   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 17:32:54.041042   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 17:32:54.065355   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 17:32:54.087492   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0717 17:32:54.103125   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0717 17:32:54.118399   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0717 17:32:54.132971   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0717 17:32:54.149744   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0717 17:32:54.164764   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0717 17:32:54.180784   32725 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0717 17:32:54.196104   32725 ssh_runner.go:195] Run: openssl version
	I0717 17:32:54.201662   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 17:32:54.211240   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:32:54.215270   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:32:54.215319   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:32:54.220442   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 17:32:54.229827   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 17:32:54.239667   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 17:32:54.243710   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 17:32:54.243773   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 17:32:54.249216   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 17:32:54.259473   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 17:32:54.269225   32725 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 17:32:54.273248   32725 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 17:32:54.273299   32725 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 17:32:54.278432   32725 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
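The .0 link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective certificates, which is why each link step is preceded by an "openssl x509 -hash" call. The equivalent manual check for the cluster CA (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, so the trust entry becomes /etc/ssl/certs/b5213941.0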
	I0717 17:32:54.287892   32725 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 17:32:54.291981   32725 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 17:32:54.292036   32725 kubeadm.go:934] updating node {m03 192.168.39.187 8443 v1.30.2 crio true true} ...
	I0717 17:32:54.292146   32725 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174628-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
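On the new node, the kubelet flags above end up in the drop-in copied a few steps later (10-kubeadm.conf, 313 bytes). If they need to be inspected after the fact, something like the following would show the rendered unit; a sketch, assuming a shell on ha-174628-m03:

    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf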
	I0717 17:32:54.292178   32725 kube-vip.go:115] generating kube-vip config ...
	I0717 17:32:54.292207   32725 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 17:32:54.306918   32725 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 17:32:54.306996   32725 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
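With cp_enable and lb_enable both set, kube-vip advertises 192.168.39.254 on eth0 of the elected leader and load-balances port 8443 across the control-plane members. A minimal post-hoc check, assuming kubectl access to the cluster (illustrative):

    # on whichever node currently holds the lease, the VIP should be present on eth0
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # the leader is recorded in the plndr-cp-lock Lease named in the manifest above
    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'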
	I0717 17:32:54.307065   32725 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 17:32:54.317023   32725 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.2': No such file or directory
	
	Initiating transfer...
	I0717 17:32:54.317082   32725 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.2
	I0717 17:32:54.326591   32725 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
	I0717 17:32:54.326637   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:32:54.326593   32725 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
	I0717 17:32:54.326588   32725 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubeadm.sha256
	I0717 17:32:54.326705   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubectl -> /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 17:32:54.326720   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubeadm -> /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 17:32:54.326777   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm
	I0717 17:32:54.326782   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl
	I0717 17:32:54.343547   32725 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubelet -> /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 17:32:54.343569   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubectl': No such file or directory
	I0717 17:32:54.343589   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubectl --> /var/lib/minikube/binaries/v1.30.2/kubectl (51454104 bytes)
	I0717 17:32:54.343649   32725 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet
	I0717 17:32:54.343708   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubeadm': No such file or directory
	I0717 17:32:54.343739   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubeadm --> /var/lib/minikube/binaries/v1.30.2/kubeadm (50249880 bytes)
	I0717 17:32:54.382827   32725 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.2/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.2/kubelet': No such file or directory
	I0717 17:32:54.382860   32725 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.30.2/kubelet --> /var/lib/minikube/binaries/v1.30.2/kubelet (100124920 bytes)
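The three binaries are fetched against the .sha256 companion files referenced in the URLs above. The manual equivalent of that checksum-pinned download, using kubelet as the example (illustrative):

    curl -LO https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check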
	I0717 17:32:55.129761   32725 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0717 17:32:55.139472   32725 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0717 17:32:55.156008   32725 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 17:32:55.174308   32725 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 17:32:55.191174   32725 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 17:32:55.194996   32725 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 17:32:55.207583   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:32:55.328277   32725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:32:55.352521   32725 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:32:55.353063   32725 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:32:55.353118   32725 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:32:55.368193   32725 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
	I0717 17:32:55.368665   32725 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:32:55.369265   32725 main.go:141] libmachine: Using API Version  1
	I0717 17:32:55.369292   32725 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:32:55.369624   32725 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:32:55.369864   32725 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:32:55.370050   32725 start.go:317] joinCluster: &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cluster
Name:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:32:55.370201   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 17:32:55.370220   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:32:55.373242   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:32:55.373707   32725 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:32:55.373732   32725 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:32:55.373923   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:32:55.374110   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:32:55.374299   32725 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:32:55.374447   32725 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:32:55.536354   32725 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:32:55.536405   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vk8qsb.12drdxxthtwm1ogt --discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174628-m03 --control-plane --apiserver-advertise-address=192.168.39.187 --apiserver-bind-port=8443"
	I0717 17:33:19.274771   32725 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vk8qsb.12drdxxthtwm1ogt --discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-174628-m03 --control-plane --apiserver-advertise-address=192.168.39.187 --apiserver-bind-port=8443": (23.738336094s)
	I0717 17:33:19.274813   32725 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 17:33:19.815424   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-174628-m03 minikube.k8s.io/updated_at=2024_07_17T17_33_19_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=ha-174628 minikube.k8s.io/primary=false
	I0717 17:33:19.917568   32725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-174628-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0717 17:33:20.036541   32725 start.go:319] duration metric: took 24.666487067s to joinCluster
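At this point m03 has been joined as a third control-plane member, labeled, and had its control-plane taint removed. A quick sanity check on the resulting topology, assuming the kubeconfig from this run (illustrative):

    kubectl get nodes -o wide
    # three etcd members are expected, one static pod per control-plane node
    kubectl -n kube-system get pods -l component=etcd -o wide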
	I0717 17:33:20.036649   32725 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 17:33:20.036991   32725 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:33:20.037950   32725 out.go:177] * Verifying Kubernetes components...
	I0717 17:33:20.038778   32725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:33:20.268248   32725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:33:20.287736   32725 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:33:20.287994   32725 kapi.go:59] client config for ha-174628: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.crt", KeyFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key", CAFile:"/home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0717 17:33:20.288072   32725 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.100:8443
	I0717 17:33:20.288315   32725 node_ready.go:35] waiting up to 6m0s for node "ha-174628-m03" to be "Ready" ...
	I0717 17:33:20.288406   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:20.288415   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:20.288422   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:20.288426   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:20.292742   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:33:20.789485   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:20.789515   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:20.789529   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:20.789535   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:20.793251   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:21.289180   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:21.289200   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:21.289208   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:21.289213   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:21.292997   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:21.789506   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:21.789526   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:21.789537   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:21.789542   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:21.793304   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:22.288588   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:22.288614   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:22.288622   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:22.288626   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:22.291561   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:22.292125   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
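The GET loop above polls the node object roughly every 500ms until its Ready condition flips. Outside the test harness the same wait could be expressed as (illustrative):

    kubectl wait --for=condition=Ready node/ha-174628-m03 --timeout=6m0s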
	I0717 17:33:22.788488   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:22.788510   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:22.788521   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:22.788530   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:22.791355   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:23.288918   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:23.288960   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:23.288973   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:23.288977   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:23.298962   32725 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 17:33:23.788564   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:23.788587   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:23.788596   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:23.788603   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:23.792071   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:24.288548   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:24.288576   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:24.288587   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:24.288592   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:24.292616   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:33:24.293180   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:24.788719   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:24.788739   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:24.788747   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:24.788750   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:24.791624   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:25.288577   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:25.288597   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:25.288605   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:25.288616   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:25.291759   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:25.789482   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:25.789510   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:25.789521   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:25.789525   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:25.793388   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:26.289346   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:26.289368   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:26.289376   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:26.289380   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:26.292671   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:26.293560   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:26.788771   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:26.788795   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:26.788806   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:26.788813   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:26.792188   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:27.288499   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:27.288520   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:27.288528   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:27.288532   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:27.291613   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:27.789524   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:27.789548   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:27.789561   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:27.789570   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:27.793307   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:28.288669   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:28.288691   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:28.288699   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:28.288703   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:28.292413   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:28.789196   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:28.789221   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:28.789232   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:28.789240   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:28.792624   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:28.793214   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:29.288597   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:29.288620   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:29.288631   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:29.288641   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:29.291909   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:29.789374   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:29.789395   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:29.789402   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:29.789405   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:29.793097   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:30.289204   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:30.289228   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:30.289240   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:30.289246   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:30.292771   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:30.789490   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:30.789511   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:30.789518   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:30.789523   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:30.793133   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:30.793741   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:31.289202   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:31.289226   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:31.289232   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:31.289236   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:31.292290   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:31.789043   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:31.789066   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:31.789073   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:31.789076   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:31.792378   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:32.288973   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:32.288993   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:32.289001   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:32.289005   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:32.292148   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:32.788928   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:32.788971   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:32.788982   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:32.788987   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:32.792255   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:33.288589   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:33.288609   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:33.288619   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:33.288624   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:33.291393   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:33.292006   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:33.789035   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:33.789057   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:33.789064   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:33.789068   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:33.792158   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:34.289171   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:34.289195   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:34.289204   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:34.289211   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:34.293745   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:33:34.788898   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:34.788922   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:34.788934   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:34.788940   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:34.792432   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:35.289082   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:35.289102   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:35.289110   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:35.289113   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:35.292225   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:35.292913   32725 node_ready.go:53] node "ha-174628-m03" has status "Ready":"False"
	I0717 17:33:35.788743   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:35.788769   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:35.788781   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:35.788810   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:35.791885   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:36.288995   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:36.289023   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:36.289032   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:36.289037   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:36.292342   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:36.789265   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:36.789290   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:36.789298   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:36.789318   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:36.792603   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:37.288510   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:37.288538   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.288550   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.288556   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.292041   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:37.788631   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:37.788653   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.788663   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.788669   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.791737   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:37.792484   32725 node_ready.go:49] node "ha-174628-m03" has status "Ready":"True"
	I0717 17:33:37.792504   32725 node_ready.go:38] duration metric: took 17.504173412s for node "ha-174628-m03" to be "Ready" ...
	I0717 17:33:37.792513   32725 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 17:33:37.792585   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:33:37.792598   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.792623   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.792632   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.800718   32725 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 17:33:37.808766   32725 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.808834   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-ljjl7
	I0717 17:33:37.808842   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.808849   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.808855   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.811798   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.812761   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:37.812775   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.812783   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.812788   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.815155   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.815818   32725 pod_ready.go:92] pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:37.815836   32725 pod_ready.go:81] duration metric: took 7.048784ms for pod "coredns-7db6d8ff4d-ljjl7" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.815848   32725 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.815908   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nb567
	I0717 17:33:37.815919   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.815929   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.815938   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.818715   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.819439   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:37.819456   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.819466   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.819472   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.821430   32725 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 17:33:37.821860   32725 pod_ready.go:92] pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:37.821873   32725 pod_ready.go:81] duration metric: took 6.018832ms for pod "coredns-7db6d8ff4d-nb567" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.821884   32725 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.821934   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628
	I0717 17:33:37.821941   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.821948   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.821955   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.823945   32725 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 17:33:37.824397   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:37.824411   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.824420   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.824428   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.826558   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.827099   32725 pod_ready.go:92] pod "etcd-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:37.827115   32725 pod_ready.go:81] duration metric: took 5.22081ms for pod "etcd-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.827125   32725 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.827176   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628-m02
	I0717 17:33:37.827187   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.827197   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.827204   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.829485   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.830061   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:37.830076   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.830087   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.830092   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.832117   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:37.832485   32725 pod_ready.go:92] pod "etcd-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:37.832501   32725 pod_ready.go:81] duration metric: took 5.37018ms for pod "etcd-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.832509   32725 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:37.988954   32725 request.go:629] Waited for 156.367687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628-m03
	I0717 17:33:37.989014   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/etcd-ha-174628-m03
	I0717 17:33:37.989021   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:37.989035   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:37.989042   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:37.992402   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:38.189553   32725 request.go:629] Waited for 196.312101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:38.189608   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:38.189613   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:38.189620   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:38.189623   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:38.192602   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:38.193114   32725 pod_ready.go:92] pod "etcd-ha-174628-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:38.193131   32725 pod_ready.go:81] duration metric: took 360.615576ms for pod "etcd-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:38.193146   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:38.389397   32725 request.go:629] Waited for 196.170987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628
	I0717 17:33:38.389468   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628
	I0717 17:33:38.389476   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:38.389483   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:38.389491   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:38.392753   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:38.589173   32725 request.go:629] Waited for 195.758114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:38.589234   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:38.589255   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:38.589269   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:38.589276   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:38.592906   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:38.593635   32725 pod_ready.go:92] pod "kube-apiserver-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:38.593653   32725 pod_ready.go:81] duration metric: took 400.501461ms for pod "kube-apiserver-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:38.593666   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:38.788628   32725 request.go:629] Waited for 194.886624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m02
	I0717 17:33:38.788711   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m02
	I0717 17:33:38.788723   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:38.788737   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:38.788746   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:38.791882   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:38.988998   32725 request.go:629] Waited for 196.377182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:38.989057   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:38.989082   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:38.989090   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:38.989097   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:38.992029   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:38.992576   32725 pod_ready.go:92] pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:38.992597   32725 pod_ready.go:81] duration metric: took 398.922342ms for pod "kube-apiserver-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:38.992606   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:39.189599   32725 request.go:629] Waited for 196.906829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m03
	I0717 17:33:39.189654   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-174628-m03
	I0717 17:33:39.189659   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:39.189666   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:39.189670   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:39.192838   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:39.389338   32725 request.go:629] Waited for 195.774064ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:39.389420   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:39.389428   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:39.389438   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:39.389447   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:39.392631   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:39.393101   32725 pod_ready.go:92] pod "kube-apiserver-ha-174628-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:39.393119   32725 pod_ready.go:81] duration metric: took 400.507241ms for pod "kube-apiserver-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:39.393128   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:39.589555   32725 request.go:629] Waited for 196.347589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628
	I0717 17:33:39.589608   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628
	I0717 17:33:39.589613   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:39.589620   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:39.589625   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:39.593686   32725 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 17:33:39.788964   32725 request.go:629] Waited for 194.35981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:39.789030   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:39.789038   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:39.789046   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:39.789054   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:39.791748   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:39.792265   32725 pod_ready.go:92] pod "kube-controller-manager-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:39.792284   32725 pod_ready.go:81] duration metric: took 399.148814ms for pod "kube-controller-manager-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:39.792296   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:39.989380   32725 request.go:629] Waited for 197.009462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m02
	I0717 17:33:39.989451   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m02
	I0717 17:33:39.989456   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:39.989463   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:39.989467   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:39.992676   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:40.188963   32725 request.go:629] Waited for 195.353988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:40.189020   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:40.189026   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:40.189033   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:40.189037   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:40.191916   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:40.192530   32725 pod_ready.go:92] pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:40.192547   32725 pod_ready.go:81] duration metric: took 400.243601ms for pod "kube-controller-manager-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:40.192560   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:40.389113   32725 request.go:629] Waited for 196.485647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m03
	I0717 17:33:40.389196   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-174628-m03
	I0717 17:33:40.389208   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:40.389220   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:40.389232   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:40.392007   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:40.589259   32725 request.go:629] Waited for 196.36758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:40.589350   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:40.589363   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:40.589375   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:40.589383   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:40.593013   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:40.593724   32725 pod_ready.go:92] pod "kube-controller-manager-ha-174628-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:40.593743   32725 pod_ready.go:81] duration metric: took 401.175109ms for pod "kube-controller-manager-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:40.593753   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7lchn" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:40.788707   32725 request.go:629] Waited for 194.878686ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lchn
	I0717 17:33:40.788775   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7lchn
	I0717 17:33:40.788783   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:40.788794   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:40.788803   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:40.792019   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:40.989250   32725 request.go:629] Waited for 196.366888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:40.989312   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:40.989320   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:40.989330   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:40.989338   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:40.992429   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:40.992975   32725 pod_ready.go:92] pod "kube-proxy-7lchn" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:40.992999   32725 pod_ready.go:81] duration metric: took 399.240857ms for pod "kube-proxy-7lchn" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:40.993009   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fqf9q" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:41.189116   32725 request.go:629] Waited for 196.047614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fqf9q
	I0717 17:33:41.189234   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fqf9q
	I0717 17:33:41.189247   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:41.189259   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:41.189269   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:41.193100   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:41.388968   32725 request.go:629] Waited for 195.125881ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:41.389033   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:41.389038   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:41.389046   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:41.389050   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:41.392715   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:41.393180   32725 pod_ready.go:92] pod "kube-proxy-fqf9q" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:41.393203   32725 pod_ready.go:81] duration metric: took 400.188353ms for pod "kube-proxy-fqf9q" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:41.393213   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tjkww" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:41.588629   32725 request.go:629] Waited for 195.34713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjkww
	I0717 17:33:41.588719   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tjkww
	I0717 17:33:41.588729   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:41.588737   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:41.588743   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:41.591926   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:41.788928   32725 request.go:629] Waited for 196.346277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:41.788982   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:41.788987   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:41.788994   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:41.788997   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:41.792516   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:41.793183   32725 pod_ready.go:92] pod "kube-proxy-tjkww" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:41.793202   32725 pod_ready.go:81] duration metric: took 399.97971ms for pod "kube-proxy-tjkww" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:41.793213   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:41.989311   32725 request.go:629] Waited for 196.032696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628
	I0717 17:33:41.989373   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628
	I0717 17:33:41.989379   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:41.989387   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:41.989396   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:41.992762   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:42.189600   32725 request.go:629] Waited for 196.361839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:42.189658   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628
	I0717 17:33:42.189663   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:42.189671   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:42.189677   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:42.192370   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:42.192995   32725 pod_ready.go:92] pod "kube-scheduler-ha-174628" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:42.193014   32725 pod_ready.go:81] duration metric: took 399.792549ms for pod "kube-scheduler-ha-174628" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:42.193026   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:42.389023   32725 request.go:629] Waited for 195.940626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m02
	I0717 17:33:42.389099   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m02
	I0717 17:33:42.389106   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:42.389117   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:42.389129   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:42.392155   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:42.589175   32725 request.go:629] Waited for 196.356601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:42.589243   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m02
	I0717 17:33:42.589251   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:42.589263   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:42.589275   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:42.592976   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:42.593570   32725 pod_ready.go:92] pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:42.593588   32725 pod_ready.go:81] duration metric: took 400.555408ms for pod "kube-scheduler-ha-174628-m02" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:42.593598   32725 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:42.789676   32725 request.go:629] Waited for 195.992794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m03
	I0717 17:33:42.789728   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-174628-m03
	I0717 17:33:42.789733   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:42.789740   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:42.789746   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:42.793492   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:42.989430   32725 request.go:629] Waited for 195.274096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:42.989494   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes/ha-174628-m03
	I0717 17:33:42.989502   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:42.989515   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:42.989526   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:42.992848   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:42.993560   32725 pod_ready.go:92] pod "kube-scheduler-ha-174628-m03" in "kube-system" namespace has status "Ready":"True"
	I0717 17:33:42.993580   32725 pod_ready.go:81] duration metric: took 399.973603ms for pod "kube-scheduler-ha-174628-m03" in "kube-system" namespace to be "Ready" ...
	I0717 17:33:42.993593   32725 pod_ready.go:38] duration metric: took 5.201069604s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 17:33:42.993616   32725 api_server.go:52] waiting for apiserver process to appear ...
	I0717 17:33:42.993679   32725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:33:43.011575   32725 api_server.go:72] duration metric: took 22.974885333s to wait for apiserver process to appear ...
	I0717 17:33:43.011599   32725 api_server.go:88] waiting for apiserver healthz status ...
	I0717 17:33:43.011616   32725 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0717 17:33:43.018029   32725 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0717 17:33:43.018106   32725 round_trippers.go:463] GET https://192.168.39.100:8443/version
	I0717 17:33:43.018116   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:43.018129   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:43.018138   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:43.019123   32725 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 17:33:43.019262   32725 api_server.go:141] control plane version: v1.30.2
	I0717 17:33:43.019282   32725 api_server.go:131] duration metric: took 7.675984ms to wait for apiserver health ...
	I0717 17:33:43.019293   32725 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 17:33:43.188619   32725 request.go:629] Waited for 169.253857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:33:43.188678   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:33:43.188683   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:43.188701   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:43.188705   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:43.196310   32725 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 17:33:43.202224   32725 system_pods.go:59] 24 kube-system pods found
	I0717 17:33:43.202249   32725 system_pods.go:61] "coredns-7db6d8ff4d-ljjl7" [2c4857a1-6ccd-4122-80b5-f5bcfd2e307f] Running
	I0717 17:33:43.202254   32725 system_pods.go:61] "coredns-7db6d8ff4d-nb567" [1739ac64-be05-4438-9a8f-a0d2821a1650] Running
	I0717 17:33:43.202257   32725 system_pods.go:61] "etcd-ha-174628" [005dbd48-14a2-458a-a8b3-252696a4ce85] Running
	I0717 17:33:43.202261   32725 system_pods.go:61] "etcd-ha-174628-m02" [6598f8f5-41df-46a9-bb82-fcf2ad182e60] Running
	I0717 17:33:43.202265   32725 system_pods.go:61] "etcd-ha-174628-m03" [6b96cf8d-24de-45b7-90d1-ebde3d5a9f7c] Running
	I0717 17:33:43.202269   32725 system_pods.go:61] "kindnet-79txz" [8c09c315-591a-4835-a433-f3bc3283f305] Running
	I0717 17:33:43.202272   32725 system_pods.go:61] "kindnet-k6jnp" [9bca93ed-aca5-4540-990c-d9e6209d12d0] Running
	I0717 17:33:43.202274   32725 system_pods.go:61] "kindnet-p7tg6" [56af22ef-0bcb-42a1-8976-117b288ef240] Running
	I0717 17:33:43.202278   32725 system_pods.go:61] "kube-apiserver-ha-174628" [3f169484-b9b1-4be6-abec-2309c0bfecba] Running
	I0717 17:33:43.202281   32725 system_pods.go:61] "kube-apiserver-ha-174628-m02" [316d349c-f099-45c3-a9ab-34fbcaeaae02] Running
	I0717 17:33:43.202284   32725 system_pods.go:61] "kube-apiserver-ha-174628-m03" [1ac2a7b1-cfbd-4e77-8711-4c82792e0cd9] Running
	I0717 17:33:43.202288   32725 system_pods.go:61] "kube-controller-manager-ha-174628" [ea259b8d-9fcb-4fb1-9e32-75d6a47e44ed] Running
	I0717 17:33:43.202293   32725 system_pods.go:61] "kube-controller-manager-ha-174628-m02" [0374a405-7fb7-4367-997e-0ac06d57338d] Running
	I0717 17:33:43.202296   32725 system_pods.go:61] "kube-controller-manager-ha-174628-m03" [c5276fed-d860-4710-992e-a1b5ec2a69c0] Running
	I0717 17:33:43.202302   32725 system_pods.go:61] "kube-proxy-7lchn" [a01b695f-ec8b-4727-9c82-4251aa34d682] Running
	I0717 17:33:43.202305   32725 system_pods.go:61] "kube-proxy-fqf9q" [f74d57a9-38a2-464d-991f-fc8905fdbe3f] Running
	I0717 17:33:43.202311   32725 system_pods.go:61] "kube-proxy-tjkww" [d50b5e14-72c3-4338-9429-40764e58ca45] Running
	I0717 17:33:43.202314   32725 system_pods.go:61] "kube-scheduler-ha-174628" [1776b347-cc13-44da-a60a-199bdb85d2c2] Running
	I0717 17:33:43.202317   32725 system_pods.go:61] "kube-scheduler-ha-174628-m02" [ce3683eb-351e-40d4-a704-13dfddc2bdea] Running
	I0717 17:33:43.202322   32725 system_pods.go:61] "kube-scheduler-ha-174628-m03" [d0a0a6ad-1daf-4991-a330-2facbd6d0f7f] Running
	I0717 17:33:43.202325   32725 system_pods.go:61] "kube-vip-ha-174628" [b2d62768-e68e-4ce3-ad84-31ddac00688e] Running
	I0717 17:33:43.202327   32725 system_pods.go:61] "kube-vip-ha-174628-m02" [a6656a18-6176-4291-a094-e4b942e9ba1c] Running
	I0717 17:33:43.202330   32725 system_pods.go:61] "kube-vip-ha-174628-m03" [e77aed0c-76e0-4a43-bcc2-f4c96b7d3b37] Running
	I0717 17:33:43.202334   32725 system_pods.go:61] "storage-provisioner" [8c0601bb-36f6-434d-8e9d-1e326bf682f5] Running
	I0717 17:33:43.202345   32725 system_pods.go:74] duration metric: took 183.046597ms to wait for pod list to return data ...
	I0717 17:33:43.202356   32725 default_sa.go:34] waiting for default service account to be created ...
	I0717 17:33:43.388703   32725 request.go:629] Waited for 186.278998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0717 17:33:43.388765   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/default/serviceaccounts
	I0717 17:33:43.388771   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:43.388777   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:43.388784   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:43.391520   32725 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 17:33:43.391645   32725 default_sa.go:45] found service account: "default"
	I0717 17:33:43.391660   32725 default_sa.go:55] duration metric: took 189.299052ms for default service account to be created ...
	I0717 17:33:43.391668   32725 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 17:33:43.588885   32725 request.go:629] Waited for 197.151009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:33:43.588961   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/namespaces/kube-system/pods
	I0717 17:33:43.588981   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:43.588992   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:43.589000   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:43.595325   32725 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 17:33:43.601693   32725 system_pods.go:86] 24 kube-system pods found
	I0717 17:33:43.601716   32725 system_pods.go:89] "coredns-7db6d8ff4d-ljjl7" [2c4857a1-6ccd-4122-80b5-f5bcfd2e307f] Running
	I0717 17:33:43.601722   32725 system_pods.go:89] "coredns-7db6d8ff4d-nb567" [1739ac64-be05-4438-9a8f-a0d2821a1650] Running
	I0717 17:33:43.601727   32725 system_pods.go:89] "etcd-ha-174628" [005dbd48-14a2-458a-a8b3-252696a4ce85] Running
	I0717 17:33:43.601732   32725 system_pods.go:89] "etcd-ha-174628-m02" [6598f8f5-41df-46a9-bb82-fcf2ad182e60] Running
	I0717 17:33:43.601736   32725 system_pods.go:89] "etcd-ha-174628-m03" [6b96cf8d-24de-45b7-90d1-ebde3d5a9f7c] Running
	I0717 17:33:43.601741   32725 system_pods.go:89] "kindnet-79txz" [8c09c315-591a-4835-a433-f3bc3283f305] Running
	I0717 17:33:43.601745   32725 system_pods.go:89] "kindnet-k6jnp" [9bca93ed-aca5-4540-990c-d9e6209d12d0] Running
	I0717 17:33:43.601750   32725 system_pods.go:89] "kindnet-p7tg6" [56af22ef-0bcb-42a1-8976-117b288ef240] Running
	I0717 17:33:43.601759   32725 system_pods.go:89] "kube-apiserver-ha-174628" [3f169484-b9b1-4be6-abec-2309c0bfecba] Running
	I0717 17:33:43.601765   32725 system_pods.go:89] "kube-apiserver-ha-174628-m02" [316d349c-f099-45c3-a9ab-34fbcaeaae02] Running
	I0717 17:33:43.601775   32725 system_pods.go:89] "kube-apiserver-ha-174628-m03" [1ac2a7b1-cfbd-4e77-8711-4c82792e0cd9] Running
	I0717 17:33:43.601782   32725 system_pods.go:89] "kube-controller-manager-ha-174628" [ea259b8d-9fcb-4fb1-9e32-75d6a47e44ed] Running
	I0717 17:33:43.601789   32725 system_pods.go:89] "kube-controller-manager-ha-174628-m02" [0374a405-7fb7-4367-997e-0ac06d57338d] Running
	I0717 17:33:43.601797   32725 system_pods.go:89] "kube-controller-manager-ha-174628-m03" [c5276fed-d860-4710-992e-a1b5ec2a69c0] Running
	I0717 17:33:43.601803   32725 system_pods.go:89] "kube-proxy-7lchn" [a01b695f-ec8b-4727-9c82-4251aa34d682] Running
	I0717 17:33:43.601811   32725 system_pods.go:89] "kube-proxy-fqf9q" [f74d57a9-38a2-464d-991f-fc8905fdbe3f] Running
	I0717 17:33:43.601817   32725 system_pods.go:89] "kube-proxy-tjkww" [d50b5e14-72c3-4338-9429-40764e58ca45] Running
	I0717 17:33:43.601825   32725 system_pods.go:89] "kube-scheduler-ha-174628" [1776b347-cc13-44da-a60a-199bdb85d2c2] Running
	I0717 17:33:43.601831   32725 system_pods.go:89] "kube-scheduler-ha-174628-m02" [ce3683eb-351e-40d4-a704-13dfddc2bdea] Running
	I0717 17:33:43.601840   32725 system_pods.go:89] "kube-scheduler-ha-174628-m03" [d0a0a6ad-1daf-4991-a330-2facbd6d0f7f] Running
	I0717 17:33:43.601846   32725 system_pods.go:89] "kube-vip-ha-174628" [b2d62768-e68e-4ce3-ad84-31ddac00688e] Running
	I0717 17:33:43.601852   32725 system_pods.go:89] "kube-vip-ha-174628-m02" [a6656a18-6176-4291-a094-e4b942e9ba1c] Running
	I0717 17:33:43.601857   32725 system_pods.go:89] "kube-vip-ha-174628-m03" [e77aed0c-76e0-4a43-bcc2-f4c96b7d3b37] Running
	I0717 17:33:43.601866   32725 system_pods.go:89] "storage-provisioner" [8c0601bb-36f6-434d-8e9d-1e326bf682f5] Running
	I0717 17:33:43.601875   32725 system_pods.go:126] duration metric: took 210.197708ms to wait for k8s-apps to be running ...
	I0717 17:33:43.601887   32725 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 17:33:43.601940   32725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:33:43.620097   32725 system_svc.go:56] duration metric: took 18.203606ms WaitForService to wait for kubelet
	I0717 17:33:43.620126   32725 kubeadm.go:582] duration metric: took 23.583439388s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:33:43.620150   32725 node_conditions.go:102] verifying NodePressure condition ...
	I0717 17:33:43.789600   32725 request.go:629] Waited for 169.359963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.100:8443/api/v1/nodes
	I0717 17:33:43.789652   32725 round_trippers.go:463] GET https://192.168.39.100:8443/api/v1/nodes
	I0717 17:33:43.789658   32725 round_trippers.go:469] Request Headers:
	I0717 17:33:43.789665   32725 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 17:33:43.789671   32725 round_trippers.go:473]     Accept: application/json, */*
	I0717 17:33:43.793220   32725 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 17:33:43.794230   32725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:33:43.794252   32725 node_conditions.go:123] node cpu capacity is 2
	I0717 17:33:43.794269   32725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:33:43.794272   32725 node_conditions.go:123] node cpu capacity is 2
	I0717 17:33:43.794276   32725 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 17:33:43.794279   32725 node_conditions.go:123] node cpu capacity is 2
	I0717 17:33:43.794284   32725 node_conditions.go:105] duration metric: took 174.129056ms to run NodePressure ...
	I0717 17:33:43.794297   32725 start.go:241] waiting for startup goroutines ...
	I0717 17:33:43.794319   32725 start.go:255] writing updated cluster config ...
	I0717 17:33:43.794591   32725 ssh_runner.go:195] Run: rm -f paused
	I0717 17:33:43.845016   32725 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 17:33:43.847074   32725 out.go:177] * Done! kubectl is now configured to use "ha-174628" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.738234068Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721237897738201301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a24e4ac-dad3-4c73-8449-a410a7d951d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.741526507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aba298c6-2a04-4574-8002-2cde75e2bbf9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.741632979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aba298c6-2a04-4574-8002-2cde75e2bbf9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.741944769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721237628009895023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69af3791a58f6cd70f065a41e9453615e39f8d6b52615b6b10a22f9276870e64,PodSandboxId:ead8e0797918ab3cc149e030c67415a6da028f6e6438255003e750e47ddd1dd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721237422018013934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421982314266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421928423514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be
05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721237410216056539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172123740
6540056768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370441d5e9e25be3ceff0e96f53875a159099004aa797d2570be4e3e61aa9e59,PodSandboxId:9743622035ce2bd2b9a6be8681bb69bbbf89e91f886672970a9ed528068ed1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17212373905
30419999,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70628d083fb6fd792a0e57561bb9973,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721237387075364617,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721237387046609217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd,PodSandboxId:a5ac70b85a0a2a94429dd2f26d17401062eb6fb6872bba08b142d9e10c1dc17a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721237387002758616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac,PodSandboxId:793e0c3a8ff473b98d0fb8e714880ffefbed7e002c92bfdae5801f1e5cac505c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721237386972165967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aba298c6-2a04-4574-8002-2cde75e2bbf9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.776761593Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82a61375-6459-44e4-8ec6-0027b5d32c08 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.776844341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82a61375-6459-44e4-8ec6-0027b5d32c08 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.778276506Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa980d74-3e5c-460a-a847-d676f7157103 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.778765572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721237897778742644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa980d74-3e5c-460a-a847-d676f7157103 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.779226368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62afe202-2906-4619-b8ff-dc602f143d86 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.779472417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62afe202-2906-4619-b8ff-dc602f143d86 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.779760535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721237628009895023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69af3791a58f6cd70f065a41e9453615e39f8d6b52615b6b10a22f9276870e64,PodSandboxId:ead8e0797918ab3cc149e030c67415a6da028f6e6438255003e750e47ddd1dd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721237422018013934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421982314266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421928423514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be
05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721237410216056539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172123740
6540056768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370441d5e9e25be3ceff0e96f53875a159099004aa797d2570be4e3e61aa9e59,PodSandboxId:9743622035ce2bd2b9a6be8681bb69bbbf89e91f886672970a9ed528068ed1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17212373905
30419999,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70628d083fb6fd792a0e57561bb9973,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721237387075364617,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721237387046609217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd,PodSandboxId:a5ac70b85a0a2a94429dd2f26d17401062eb6fb6872bba08b142d9e10c1dc17a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721237387002758616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac,PodSandboxId:793e0c3a8ff473b98d0fb8e714880ffefbed7e002c92bfdae5801f1e5cac505c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721237386972165967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62afe202-2906-4619-b8ff-dc602f143d86 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.817177792Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a2a5bfd-42ae-408f-a867-dc83a3f71271 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.817350697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a2a5bfd-42ae-408f-a867-dc83a3f71271 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.818184173Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3618adb-5425-4da0-b7ac-9c77fd9f9b35 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.818764047Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721237897818738928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3618adb-5425-4da0-b7ac-9c77fd9f9b35 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.819211341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=011d93ef-75a1-4647-9f68-9ea39f6f7500 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.819273136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=011d93ef-75a1-4647-9f68-9ea39f6f7500 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.819565581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721237628009895023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69af3791a58f6cd70f065a41e9453615e39f8d6b52615b6b10a22f9276870e64,PodSandboxId:ead8e0797918ab3cc149e030c67415a6da028f6e6438255003e750e47ddd1dd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721237422018013934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421982314266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421928423514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be
05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721237410216056539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172123740
6540056768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370441d5e9e25be3ceff0e96f53875a159099004aa797d2570be4e3e61aa9e59,PodSandboxId:9743622035ce2bd2b9a6be8681bb69bbbf89e91f886672970a9ed528068ed1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17212373905
30419999,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70628d083fb6fd792a0e57561bb9973,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721237387075364617,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721237387046609217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd,PodSandboxId:a5ac70b85a0a2a94429dd2f26d17401062eb6fb6872bba08b142d9e10c1dc17a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721237387002758616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac,PodSandboxId:793e0c3a8ff473b98d0fb8e714880ffefbed7e002c92bfdae5801f1e5cac505c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721237386972165967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=011d93ef-75a1-4647-9f68-9ea39f6f7500 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.861471548Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2dab79f-7746-410b-8313-caf5ba38a58d name=/runtime.v1.RuntimeService/Version
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.861642125Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2dab79f-7746-410b-8313-caf5ba38a58d name=/runtime.v1.RuntimeService/Version
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.862527085Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=946eebf3-b083-4890-8f6f-c8ca20df3871 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.863056441Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721237897863033900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=946eebf3-b083-4890-8f6f-c8ca20df3871 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.863438189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ee6332f-20a6-4d94-bb29-e5a11c9fecd5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.863497209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ee6332f-20a6-4d94-bb29-e5a11c9fecd5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:38:17 ha-174628 crio[675]: time="2024-07-17 17:38:17.863774982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721237628009895023,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69af3791a58f6cd70f065a41e9453615e39f8d6b52615b6b10a22f9276870e64,PodSandboxId:ead8e0797918ab3cc149e030c67415a6da028f6e6438255003e750e47ddd1dd7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721237422018013934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421982314266,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721237421928423514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be
05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CO
NTAINER_RUNNING,CreatedAt:1721237410216056539,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:172123740
6540056768,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:370441d5e9e25be3ceff0e96f53875a159099004aa797d2570be4e3e61aa9e59,PodSandboxId:9743622035ce2bd2b9a6be8681bb69bbbf89e91f886672970a9ed528068ed1cc,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17212373905
30419999,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b70628d083fb6fd792a0e57561bb9973,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721237387075364617,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721237387046609217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd,PodSandboxId:a5ac70b85a0a2a94429dd2f26d17401062eb6fb6872bba08b142d9e10c1dc17a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721237387002758616,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac,PodSandboxId:793e0c3a8ff473b98d0fb8e714880ffefbed7e002c92bfdae5801f1e5cac505c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721237386972165967,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ee6332f-20a6-4d94-bb29-e5a11c9fecd5 name=/runtime.v1.RuntimeService/ListContainers
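	The Version / ImageFsInfo / ListContainers request–response pairs repeated above appear to be clients (such as the kubelet or the log collector) polling CRI-O over its CRI gRPC socket; the "No filters were applied" lines correspond to ListContainers calls with an empty filter. As a rough illustration only (not part of the test harness), a minimal Go sketch issuing the same three calls against the socket named in the node's cri-socket annotation, assuming the standard k8s.io/cri-api client package, could look like this:

	// Sketch: query CRI-O over the CRI gRPC API, mirroring the Version,
	// ImageFsInfo, and ListContainers calls seen in the crio debug log above.
	// Assumes k8s.io/cri-api and google.golang.org/grpc are available; the
	// socket path is the one shown in the node's cri-socket annotation.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio socket: %v", err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)

		// Version: corresponds to the VersionRequest/VersionResponse pairs above.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("version: %v", err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// ImageFsInfo: image filesystem usage (the overlay-images mountpoint above).
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatalf("image fs info: %v", err)
		}
		for _, u := range fs.ImageFilesystems {
			fmt.Printf("%s used=%d bytes\n", u.FsId.Mountpoint, u.UsedBytes.Value)
		}

		// ListContainers with an empty filter returns the full container list,
		// which is the "No filters were applied" case logged by crio.
		cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{Filter: &runtimeapi.ContainerFilter{}})
		if err != nil {
			log.Fatalf("list containers: %v", err)
		}
		for _, c := range cs.Containers {
			fmt.Printf("%-13.13s %-25s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}

	The container-status table that follows is essentially the condensed, human-readable view of the same ListContainers data.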
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	88ba3b0cb3105       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   c4d7c5b8a369b       busybox-fc5497c4f-8zv26
	69af3791a58f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   ead8e0797918a       storage-provisioner
	976aeedd4a51e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   6732d32de6a25       coredns-7db6d8ff4d-ljjl7
	97987539971dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   9ca7e3b66f8e6       coredns-7db6d8ff4d-nb567
	2fefa59bf46cd       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    8 minutes ago       Running             kindnet-cni               0                   db21995c3cb31       kindnet-k6jnp
	d139046cefa3a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      8 minutes ago       Running             kube-proxy                0                   4b7a03b7f681c       kube-proxy-fqf9q
	370441d5e9e25       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   9743622035ce2       kube-vip-ha-174628
	e1c91b7db4ab1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   d488537da1381       etcd-ha-174628
	889d28a83e85b       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      8 minutes ago       Running             kube-scheduler            0                   4c7f495eb3d6a       kube-scheduler-ha-174628
	9880796029aa2       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      8 minutes ago       Running             kube-controller-manager   0                   a5ac70b85a0a2       kube-controller-manager-ha-174628
	dbb0842f9354f       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      8 minutes ago       Running             kube-apiserver            0                   793e0c3a8ff47       kube-apiserver-ha-174628
	
	
	==> coredns [976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9] <==
	[INFO] 10.244.2.2:33788 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095709s
	[INFO] 10.244.0.4:46982 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154802s
	[INFO] 10.244.0.4:56230 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001992689s
	[INFO] 10.244.0.4:41627 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0003841s
	[INFO] 10.244.0.4:58911 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000145s
	[INFO] 10.244.0.4:42628 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001405769s
	[INFO] 10.244.0.4:53106 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132475s
	[INFO] 10.244.1.2:56143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010532s
	[INFO] 10.244.1.2:57864 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093166s
	[INFO] 10.244.1.2:36333 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127244s
	[INFO] 10.244.1.2:59545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001305574s
	[INFO] 10.244.1.2:38967 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068655s
	[INFO] 10.244.2.2:42756 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113607s
	[INFO] 10.244.2.2:43563 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069199s
	[INFO] 10.244.0.4:59480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109399s
	[INFO] 10.244.0.4:42046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068182s
	[INFO] 10.244.0.4:52729 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087202s
	[INFO] 10.244.1.2:54148 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075008s
	[INFO] 10.244.2.2:34613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101677s
	[INFO] 10.244.2.2:34221 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203479s
	[INFO] 10.244.0.4:35705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081127s
	[INFO] 10.244.0.4:36734 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090761s
	[INFO] 10.244.1.2:34328 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093559s
	[INFO] 10.244.1.2:39930 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149652s
	[INFO] 10.244.1.2:55584 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101975s
	
	
	==> coredns [97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb] <==
	[INFO] 10.244.2.2:56026 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.004666156s
	[INFO] 10.244.2.2:42295 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.012320218s
	[INFO] 10.244.2.2:36255 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.006059425s
	[INFO] 10.244.0.4:34085 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000090919s
	[INFO] 10.244.0.4:39117 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001872218s
	[INFO] 10.244.1.2:51622 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00157319s
	[INFO] 10.244.2.2:60810 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001843s
	[INFO] 10.244.2.2:59317 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00028437s
	[INFO] 10.244.2.2:38028 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131271s
	[INFO] 10.244.0.4:34076 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171504s
	[INFO] 10.244.0.4:47718 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126429s
	[INFO] 10.244.1.2:45110 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001972368s
	[INFO] 10.244.1.2:56072 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000151997s
	[INFO] 10.244.1.2:56149 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091586s
	[INFO] 10.244.2.2:58101 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116587s
	[INFO] 10.244.2.2:38105 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059217s
	[INFO] 10.244.0.4:33680 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067251s
	[INFO] 10.244.1.2:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149516s
	[INFO] 10.244.1.2:49668 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120356s
	[INFO] 10.244.1.2:39442 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065763s
	[INFO] 10.244.2.2:49955 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116571s
	[INFO] 10.244.2.2:46651 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013941s
	[INFO] 10.244.0.4:39128 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097533s
	[INFO] 10.244.0.4:36840 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042262s
	[INFO] 10.244.1.2:36575 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084857s
	
	
	==> describe nodes <==
	Name:               ha-174628
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T17_29_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:29:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:38:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:33:58 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:33:58 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:33:58 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:33:58 +0000   Wed, 17 Jul 2024 17:30:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-174628
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 38d679c72879470c96b5b9e9677b521d
	  System UUID:                38d679c7-2879-470c-96b5-b9e9677b521d
	  Boot ID:                    dc99f06a-b6ac-4ceb-b149-a41be92c5af1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8zv26              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 coredns-7db6d8ff4d-ljjl7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m12s
	  kube-system                 coredns-7db6d8ff4d-nb567             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m12s
	  kube-system                 etcd-ha-174628                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m27s
	  kube-system                 kindnet-k6jnp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m13s
	  kube-system                 kube-apiserver-ha-174628             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-controller-manager-ha-174628    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-fqf9q                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-scheduler-ha-174628             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-vip-ha-174628                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m11s  kube-proxy       
	  Normal  Starting                 8m25s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m25s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m25s  kubelet          Node ha-174628 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m25s  kubelet          Node ha-174628 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m25s  kubelet          Node ha-174628 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m13s  node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal  NodeReady                7m57s  kubelet          Node ha-174628 status is now: NodeReady
	  Normal  RegisteredNode           5m57s  node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal  RegisteredNode           4m44s  node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	
	
	Name:               ha-174628-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T17_32_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:32:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:34:48 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 17:34:05 +0000   Wed, 17 Jul 2024 17:35:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 17:34:05 +0000   Wed, 17 Jul 2024 17:35:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 17:34:05 +0000   Wed, 17 Jul 2024 17:35:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 17:34:05 +0000   Wed, 17 Jul 2024 17:35:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-174628-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 903b989e686a4ab6b3e3c3b6b498bfac
	  System UUID:                903b989e-686a-4ab6-b3e3-c3b6b498bfac
	  Boot ID:                    90064014-f03d-439c-b564-d9933dddd6e9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ftgzz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 etcd-ha-174628-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m13s
	  kube-system                 kindnet-79txz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-174628-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-controller-manager-ha-174628-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-7lchn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-174628-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-vip-ha-174628-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node ha-174628-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s (x8 over 6m15s)  kubelet          Node ha-174628-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node ha-174628-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m13s                  node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           5m57s                  node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  NodeNotReady             2m49s                  node-controller  Node ha-174628-m02 status is now: NodeNotReady
	
	
	Name:               ha-174628-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T17_33_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:33:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:38:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:34:18 +0000   Wed, 17 Jul 2024 17:33:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:34:18 +0000   Wed, 17 Jul 2024 17:33:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:34:18 +0000   Wed, 17 Jul 2024 17:33:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:34:18 +0000   Wed, 17 Jul 2024 17:33:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    ha-174628-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e252934bd064e64b4b5442d8b76155e
	  System UUID:                7e252934-bd06-4e64-b4b5-442d8b76155e
	  Boot ID:                    1795f76b-0f9d-4731-97d8-e0a76fec4a3b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5mnv5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 etcd-ha-174628-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m1s
	  kube-system                 kindnet-p7tg6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m2s
	  kube-system                 kube-apiserver-ha-174628-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-controller-manager-ha-174628-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-proxy-tjkww                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-ha-174628-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-vip-ha-174628-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m57s                kube-proxy       
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	  Normal  Starting                 5m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m2s (x2 over 5m2s)  kubelet          Node ha-174628-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x2 over 5m2s)  kubelet          Node ha-174628-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x2 over 5m2s)  kubelet          Node ha-174628-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m58s                node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	  Normal  RegisteredNode           4m44s                node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	  Normal  NodeReady                4m41s                kubelet          Node ha-174628-m03 status is now: NodeReady
	
	
	Name:               ha-174628-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T17_34_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:34:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:38:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:34:48 +0000   Wed, 17 Jul 2024 17:34:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:34:48 +0000   Wed, 17 Jul 2024 17:34:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:34:48 +0000   Wed, 17 Jul 2024 17:34:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:34:48 +0000   Wed, 17 Jul 2024 17:34:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-174628-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1beb916d1ab94a9e97732204939d8f7c
	  System UUID:                1beb916d-1ab9-4a9e-9773-2204939d8f7c
	  Boot ID:                    f498dee8-37a8-457e-b1f0-32546079d21b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pt58p       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m1s
	  kube-system                 kube-proxy-gb548    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m1s (x2 over 4m1s)  kubelet          Node ha-174628-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x2 over 4m1s)  kubelet          Node ha-174628-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x2 over 4m1s)  kubelet          Node ha-174628-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal  NodeReady                3m42s                kubelet          Node ha-174628-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul17 17:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050826] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.422835] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.691787] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.517319] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.259835] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.065539] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054802] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.175947] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.103995] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.251338] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.953322] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +4.318399] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +0.059032] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.943760] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.083790] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.749430] kauditd_printk_skb: 18 callbacks suppressed
	[Jul17 17:30] kauditd_printk_skb: 38 callbacks suppressed
	[Jul17 17:32] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147] <==
	{"level":"warn","ts":"2024-07-17T17:38:17.729047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.085113Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.118111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.124847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.128921Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.130048Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.139523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.146339Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.152148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.15612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.16134Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.169928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.175787Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.18127Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.18433Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.187178Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.194197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.200199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.20536Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.208453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.211035Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.216227Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.221614Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.227195Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-17T17:38:18.231004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3276445ff8d31e34","from":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:38:18 up 8 min,  0 users,  load average: 0.09, 0.23, 0.15
	Linux ha-174628 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0] <==
	I0717 17:37:41.259123       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:37:51.260256       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:37:51.260307       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:37:51.260470       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:37:51.260496       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:37:51.260549       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:37:51.260564       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:37:51.260626       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:37:51.260645       1 main.go:303] handling current node
	I0717 17:38:01.265305       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:38:01.265425       1 main.go:303] handling current node
	I0717 17:38:01.265480       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:38:01.265500       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:38:01.265874       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:38:01.265941       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:38:01.266073       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:38:01.266096       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:38:11.258717       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:38:11.258876       1 main.go:303] handling current node
	I0717 17:38:11.258914       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:38:11.258935       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:38:11.259154       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:38:11.259179       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:38:11.259240       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:38:11.259258       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac] <==
	I0717 17:29:53.273012       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 17:29:53.304033       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 17:29:53.322260       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 17:30:05.278733       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 17:30:05.278733       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0717 17:30:05.829969       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0717 17:33:49.175346       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37834: use of closed network connection
	E0717 17:33:49.363385       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37856: use of closed network connection
	E0717 17:33:49.556989       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37866: use of closed network connection
	E0717 17:33:49.748583       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37896: use of closed network connection
	E0717 17:33:49.918003       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37904: use of closed network connection
	E0717 17:33:50.083798       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37908: use of closed network connection
	E0717 17:33:50.253740       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37926: use of closed network connection
	E0717 17:33:50.415077       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37950: use of closed network connection
	E0717 17:33:50.869201       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37996: use of closed network connection
	E0717 17:33:51.029232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38008: use of closed network connection
	E0717 17:33:51.205004       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38026: use of closed network connection
	E0717 17:33:51.367737       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38052: use of closed network connection
	E0717 17:33:51.543184       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38072: use of closed network connection
	E0717 17:33:51.723941       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:38082: use of closed network connection
	I0717 17:34:23.453636       1 trace.go:236] Trace[937411472]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:288ee792-0cd5-4997-aded-811d44e718b5,client:192.168.39.100,api-group:apps,api-version:v1,name:kube-proxy,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:daemonsets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy/status,user-agent:kube-controller-manager/v1.30.2 (linux/amd64) kubernetes/3968350/system:serviceaccount:kube-system:daemon-set-controller,verb:PUT (17-Jul-2024 17:34:22.922) (total time: 531ms):
	Trace[937411472]: ["GuaranteedUpdate etcd3" audit-id:288ee792-0cd5-4997-aded-811d44e718b5,key:/daemonsets/kube-system/kube-proxy,type:*apps.DaemonSet,resource:daemonsets.apps 531ms (17:34:22.922)
	Trace[937411472]:  ---"Txn call completed" 527ms (17:34:23.452)]
	Trace[937411472]: [531.584649ms] [531.584649ms] END
	W0717 17:35:11.709044       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.187]
	
	
	==> kube-controller-manager [9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd] <==
	I0717 17:33:16.618143       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-174628-m03" podCIDRs=["10.244.2.0/24"]
	I0717 17:33:20.259471       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-174628-m03"
	I0717 17:33:44.738335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.540453ms"
	I0717 17:33:44.842023       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.604384ms"
	I0717 17:33:45.030222       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="188.053637ms"
	I0717 17:33:45.068802       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.998741ms"
	I0717 17:33:45.068894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.03µs"
	I0717 17:33:45.072815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.735µs"
	I0717 17:33:45.092123       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.091µs"
	I0717 17:33:45.250623       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.609533ms"
	I0717 17:33:45.250972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="165.428µs"
	I0717 17:33:46.533183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.696µs"
	I0717 17:33:48.145337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.446596ms"
	I0717 17:33:48.146334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.802µs"
	I0717 17:33:48.243381       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.536655ms"
	I0717 17:33:48.243470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.778µs"
	I0717 17:33:48.776597       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.216202ms"
	I0717 17:33:48.776849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.859µs"
	I0717 17:34:17.911538       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-174628-m04\" does not exist"
	I0717 17:34:17.945389       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-174628-m04" podCIDRs=["10.244.3.0/24"]
	I0717 17:34:20.289197       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-174628-m04"
	I0717 17:34:36.236310       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174628-m04"
	I0717 17:35:29.796271       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174628-m04"
	I0717 17:35:29.997017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.571717ms"
	I0717 17:35:29.997101       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.06µs"
	
	
	==> kube-proxy [d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78] <==
	I0717 17:30:06.937763       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:30:06.958644       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0717 17:30:06.996584       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:30:06.996651       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:30:06.996751       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:30:06.999933       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:30:07.000394       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:30:07.000419       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:30:07.002402       1 config.go:192] "Starting service config controller"
	I0717 17:30:07.002636       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:30:07.002722       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:30:07.002729       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:30:07.003788       1 config.go:319] "Starting node config controller"
	I0717 17:30:07.003809       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:30:07.103596       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 17:30:07.103622       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:30:07.103965       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9] <==
	W0717 17:29:50.057042       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:29:50.057060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:29:50.057234       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:29:50.057245       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 17:29:50.058801       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:29:50.059137       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:29:50.909083       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 17:29:50.909138       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 17:29:50.973090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 17:29:50.973186       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 17:29:51.051258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:29:51.051559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:29:51.057052       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:29:51.057213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:29:51.212147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 17:29:51.212308       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 17:29:51.220191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:29:51.220590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:29:51.576445       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:29:51.576537       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 17:29:53.526765       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 17:34:18.004216       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pt58p\": pod kindnet-pt58p is already assigned to node \"ha-174628-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pt58p" node="ha-174628-m04"
	E0717 17:34:18.005897       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod ce812d5f-7672-4d13-ab08-9a75c9507d83(kube-system/kindnet-pt58p) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pt58p"
	E0717 17:34:18.005978       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pt58p\": pod kindnet-pt58p is already assigned to node \"ha-174628-m04\"" pod="kube-system/kindnet-pt58p"
	I0717 17:34:18.006011       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pt58p" node="ha-174628-m04"
	
	
	==> kubelet <==
	Jul 17 17:33:53 ha-174628 kubelet[1358]: E0717 17:33:53.208504    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:33:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:33:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:33:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:33:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:34:53 ha-174628 kubelet[1358]: E0717 17:34:53.208997    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:34:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:34:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:34:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:34:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:35:53 ha-174628 kubelet[1358]: E0717 17:35:53.208148    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:35:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:35:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:35:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:35:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:36:53 ha-174628 kubelet[1358]: E0717 17:36:53.209167    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:36:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:36:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:36:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:36:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:37:53 ha-174628 kubelet[1358]: E0717 17:37:53.208182    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:37:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:37:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:37:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:37:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174628 -n ha-174628
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174628 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (57.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (372.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-174628 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-174628 -v=7 --alsologtostderr
E0717 17:38:21.395408   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:38:49.081026   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-174628 -v=7 --alsologtostderr: exit status 82 (2m1.773504366s)

                                                
                                                
-- stdout --
	* Stopping node "ha-174628-m04"  ...
	* Stopping node "ha-174628-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:38:19.654375   38817 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:38:19.654608   38817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:38:19.654620   38817 out.go:304] Setting ErrFile to fd 2...
	I0717 17:38:19.654625   38817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:38:19.654897   38817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:38:19.655207   38817 out.go:298] Setting JSON to false
	I0717 17:38:19.655315   38817 mustload.go:65] Loading cluster: ha-174628
	I0717 17:38:19.655715   38817 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:38:19.655794   38817 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:38:19.655967   38817 mustload.go:65] Loading cluster: ha-174628
	I0717 17:38:19.656141   38817 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:38:19.656171   38817 stop.go:39] StopHost: ha-174628-m04
	I0717 17:38:19.656587   38817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:19.656630   38817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:19.672273   38817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0717 17:38:19.672730   38817 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:19.673387   38817 main.go:141] libmachine: Using API Version  1
	I0717 17:38:19.673418   38817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:19.673786   38817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:19.676325   38817 out.go:177] * Stopping node "ha-174628-m04"  ...
	I0717 17:38:19.677649   38817 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 17:38:19.677679   38817 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:38:19.677921   38817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 17:38:19.677945   38817 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:38:19.680828   38817 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:38:19.681249   38817 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:34:05 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:38:19.681279   38817 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:38:19.681373   38817 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:38:19.681548   38817 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:38:19.681687   38817 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:38:19.681849   38817 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	I0717 17:38:19.766927   38817 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 17:38:19.818649   38817 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 17:38:19.870678   38817 main.go:141] libmachine: Stopping "ha-174628-m04"...
	I0717 17:38:19.870715   38817 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:38:19.872288   38817 main.go:141] libmachine: (ha-174628-m04) Calling .Stop
	I0717 17:38:19.875873   38817 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 0/120
	I0717 17:38:20.972517   38817 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:38:20.973983   38817 main.go:141] libmachine: Machine "ha-174628-m04" was stopped.
	I0717 17:38:20.974000   38817 stop.go:75] duration metric: took 1.296352992s to stop
	I0717 17:38:20.974028   38817 stop.go:39] StopHost: ha-174628-m03
	I0717 17:38:20.974309   38817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:38:20.974349   38817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:38:20.988995   38817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45375
	I0717 17:38:20.989526   38817 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:38:20.989962   38817 main.go:141] libmachine: Using API Version  1
	I0717 17:38:20.989982   38817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:38:20.990300   38817 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:38:20.992379   38817 out.go:177] * Stopping node "ha-174628-m03"  ...
	I0717 17:38:20.993750   38817 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 17:38:20.993773   38817 main.go:141] libmachine: (ha-174628-m03) Calling .DriverName
	I0717 17:38:20.993967   38817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 17:38:20.993989   38817 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHHostname
	I0717 17:38:20.996774   38817 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:38:20.997190   38817 main.go:141] libmachine: (ha-174628-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:e1:a8", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:32:41 +0000 UTC Type:0 Mac:52:54:00:4c:e1:a8 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-174628-m03 Clientid:01:52:54:00:4c:e1:a8}
	I0717 17:38:20.997239   38817 main.go:141] libmachine: (ha-174628-m03) DBG | domain ha-174628-m03 has defined IP address 192.168.39.187 and MAC address 52:54:00:4c:e1:a8 in network mk-ha-174628
	I0717 17:38:20.997333   38817 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHPort
	I0717 17:38:20.997490   38817 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHKeyPath
	I0717 17:38:20.997664   38817 main.go:141] libmachine: (ha-174628-m03) Calling .GetSSHUsername
	I0717 17:38:20.997798   38817 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m03/id_rsa Username:docker}
	I0717 17:38:21.084134   38817 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 17:38:21.136746   38817 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 17:38:21.191030   38817 main.go:141] libmachine: Stopping "ha-174628-m03"...
	I0717 17:38:21.191061   38817 main.go:141] libmachine: (ha-174628-m03) Calling .GetState
	I0717 17:38:21.192580   38817 main.go:141] libmachine: (ha-174628-m03) Calling .Stop
	I0717 17:38:21.195965   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 0/120
	I0717 17:38:22.197530   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 1/120
	I0717 17:38:23.198936   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 2/120
	I0717 17:38:24.200336   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 3/120
	I0717 17:38:25.202283   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 4/120
	I0717 17:38:26.203882   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 5/120
	I0717 17:38:27.205507   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 6/120
	I0717 17:38:28.207202   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 7/120
	I0717 17:38:29.208507   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 8/120
	I0717 17:38:30.210142   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 9/120
	I0717 17:38:31.211911   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 10/120
	I0717 17:38:32.213256   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 11/120
	I0717 17:38:33.214691   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 12/120
	I0717 17:38:34.216324   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 13/120
	I0717 17:38:35.217919   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 14/120
	I0717 17:38:36.219455   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 15/120
	I0717 17:38:37.220805   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 16/120
	I0717 17:38:38.222357   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 17/120
	I0717 17:38:39.223974   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 18/120
	I0717 17:38:40.225380   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 19/120
	I0717 17:38:41.227112   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 20/120
	I0717 17:38:42.229418   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 21/120
	I0717 17:38:43.230923   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 22/120
	I0717 17:38:44.232533   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 23/120
	I0717 17:38:45.233973   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 24/120
	I0717 17:38:46.235936   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 25/120
	I0717 17:38:47.237470   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 26/120
	I0717 17:38:48.238958   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 27/120
	I0717 17:38:49.240431   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 28/120
	I0717 17:38:50.242046   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 29/120
	I0717 17:38:51.243896   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 30/120
	I0717 17:38:52.245291   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 31/120
	I0717 17:38:53.246709   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 32/120
	I0717 17:38:54.248546   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 33/120
	I0717 17:38:55.250050   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 34/120
	I0717 17:38:56.252263   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 35/120
	I0717 17:38:57.253697   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 36/120
	I0717 17:38:58.254979   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 37/120
	I0717 17:38:59.256154   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 38/120
	I0717 17:39:00.258523   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 39/120
	I0717 17:39:01.260075   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 40/120
	I0717 17:39:02.261393   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 41/120
	I0717 17:39:03.262833   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 42/120
	I0717 17:39:04.264501   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 43/120
	I0717 17:39:05.266068   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 44/120
	I0717 17:39:06.267824   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 45/120
	I0717 17:39:07.269246   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 46/120
	I0717 17:39:08.270542   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 47/120
	I0717 17:39:09.272133   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 48/120
	I0717 17:39:10.273533   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 49/120
	I0717 17:39:11.274907   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 50/120
	I0717 17:39:12.276377   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 51/120
	I0717 17:39:13.277659   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 52/120
	I0717 17:39:14.279649   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 53/120
	I0717 17:39:15.280902   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 54/120
	I0717 17:39:16.282695   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 55/120
	I0717 17:39:17.284004   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 56/120
	I0717 17:39:18.285676   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 57/120
	I0717 17:39:19.286989   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 58/120
	I0717 17:39:20.288195   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 59/120
	I0717 17:39:21.289934   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 60/120
	I0717 17:39:22.291354   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 61/120
	I0717 17:39:23.292736   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 62/120
	I0717 17:39:24.294244   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 63/120
	I0717 17:39:25.295549   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 64/120
	I0717 17:39:26.297923   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 65/120
	I0717 17:39:27.299277   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 66/120
	I0717 17:39:28.300618   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 67/120
	I0717 17:39:29.302014   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 68/120
	I0717 17:39:30.303176   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 69/120
	I0717 17:39:31.305190   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 70/120
	I0717 17:39:32.306221   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 71/120
	I0717 17:39:33.307690   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 72/120
	I0717 17:39:34.309076   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 73/120
	I0717 17:39:35.310481   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 74/120
	I0717 17:39:36.312426   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 75/120
	I0717 17:39:37.313819   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 76/120
	I0717 17:39:38.315637   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 77/120
	I0717 17:39:39.317242   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 78/120
	I0717 17:39:40.318544   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 79/120
	I0717 17:39:41.320164   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 80/120
	I0717 17:39:42.321604   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 81/120
	I0717 17:39:43.323178   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 82/120
	I0717 17:39:44.324501   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 83/120
	I0717 17:39:45.325972   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 84/120
	I0717 17:39:46.328069   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 85/120
	I0717 17:39:47.329357   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 86/120
	I0717 17:39:48.330881   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 87/120
	I0717 17:39:49.332295   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 88/120
	I0717 17:39:50.333784   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 89/120
	I0717 17:39:51.336116   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 90/120
	I0717 17:39:52.337421   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 91/120
	I0717 17:39:53.338922   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 92/120
	I0717 17:39:54.340313   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 93/120
	I0717 17:39:55.341615   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 94/120
	I0717 17:39:56.343426   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 95/120
	I0717 17:39:57.344765   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 96/120
	I0717 17:39:58.346161   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 97/120
	I0717 17:39:59.347631   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 98/120
	I0717 17:40:00.349001   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 99/120
	I0717 17:40:01.350659   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 100/120
	I0717 17:40:02.351934   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 101/120
	I0717 17:40:03.353350   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 102/120
	I0717 17:40:04.354651   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 103/120
	I0717 17:40:05.355964   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 104/120
	I0717 17:40:06.357668   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 105/120
	I0717 17:40:07.358960   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 106/120
	I0717 17:40:08.360568   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 107/120
	I0717 17:40:09.361801   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 108/120
	I0717 17:40:10.363192   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 109/120
	I0717 17:40:11.364970   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 110/120
	I0717 17:40:12.366267   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 111/120
	I0717 17:40:13.367556   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 112/120
	I0717 17:40:14.368895   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 113/120
	I0717 17:40:15.370280   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 114/120
	I0717 17:40:16.371770   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 115/120
	I0717 17:40:17.373390   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 116/120
	I0717 17:40:18.374912   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 117/120
	I0717 17:40:19.376331   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 118/120
	I0717 17:40:20.377828   38817 main.go:141] libmachine: (ha-174628-m03) Waiting for machine to stop 119/120
	I0717 17:40:21.378731   38817 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 17:40:21.378776   38817 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 17:40:21.380672   38817 out.go:177] 
	W0717 17:40:21.382214   38817 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 17:40:21.382230   38817 out.go:239] * 
	* 
	W0717 17:40:21.384384   38817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 17:40:21.385692   38817 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-174628 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174628 --wait=true -v=7 --alsologtostderr
E0717 17:40:41.791030   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:42:04.836388   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:43:21.395645   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-174628 --wait=true -v=7 --alsologtostderr: (4m8.302149083s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-174628
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174628 -n ha-174628
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174628 logs -n 25: (1.650731102s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m02:/home/docker/cp-test_ha-174628-m03_ha-174628-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m02 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m03_ha-174628-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04:/home/docker/cp-test_ha-174628-m03_ha-174628-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m04 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m03_ha-174628-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp testdata/cp-test.txt                                                | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3227756898/001/cp-test_ha-174628-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628:/home/docker/cp-test_ha-174628-m04_ha-174628.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628 sudo cat                                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m02:/home/docker/cp-test_ha-174628-m04_ha-174628-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m02 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03:/home/docker/cp-test_ha-174628-m04_ha-174628-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m03 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-174628 node stop m02 -v=7                                                     | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-174628 node start m02 -v=7                                                    | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-174628 -v=7                                                           | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-174628 -v=7                                                                | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-174628 --wait=true -v=7                                                    | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:40 UTC | 17 Jul 24 17:44 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-174628                                                                | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:44 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 17:40:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 17:40:21.429340   39305 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:40:21.429465   39305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:40:21.429474   39305 out.go:304] Setting ErrFile to fd 2...
	I0717 17:40:21.429479   39305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:40:21.429657   39305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:40:21.430186   39305 out.go:298] Setting JSON to false
	I0717 17:40:21.431115   39305 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4964,"bootTime":1721233057,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 17:40:21.431167   39305 start.go:139] virtualization: kvm guest
	I0717 17:40:21.433582   39305 out.go:177] * [ha-174628] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 17:40:21.434961   39305 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 17:40:21.435001   39305 notify.go:220] Checking for updates...
	I0717 17:40:21.437384   39305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 17:40:21.438638   39305 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:40:21.440054   39305 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:40:21.441347   39305 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 17:40:21.442519   39305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 17:40:21.444142   39305 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:40:21.444217   39305 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 17:40:21.444672   39305 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:40:21.444731   39305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:40:21.459431   39305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0717 17:40:21.459766   39305 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:40:21.460267   39305 main.go:141] libmachine: Using API Version  1
	I0717 17:40:21.460288   39305 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:40:21.460689   39305 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:40:21.460858   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:40:21.497053   39305 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 17:40:21.498314   39305 start.go:297] selected driver: kvm2
	I0717 17:40:21.498337   39305 start.go:901] validating driver "kvm2" against &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:40:21.498517   39305 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 17:40:21.498973   39305 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:40:21.499075   39305 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 17:40:21.513489   39305 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 17:40:21.514152   39305 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:40:21.514219   39305 cni.go:84] Creating CNI manager for ""
	I0717 17:40:21.514232   39305 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 17:40:21.514286   39305 start.go:340] cluster config:
	{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:40:21.514407   39305 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:40:21.516121   39305 out.go:177] * Starting "ha-174628" primary control-plane node in "ha-174628" cluster
	I0717 17:40:21.517394   39305 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:40:21.517428   39305 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 17:40:21.517437   39305 cache.go:56] Caching tarball of preloaded images
	I0717 17:40:21.517503   39305 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 17:40:21.517513   39305 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 17:40:21.517633   39305 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:40:21.517835   39305 start.go:360] acquireMachinesLock for ha-174628: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 17:40:21.517902   39305 start.go:364] duration metric: took 42.256µs to acquireMachinesLock for "ha-174628"
	I0717 17:40:21.517922   39305 start.go:96] Skipping create...Using existing machine configuration
	I0717 17:40:21.517928   39305 fix.go:54] fixHost starting: 
	I0717 17:40:21.518260   39305 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:40:21.518297   39305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:40:21.531581   39305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39671
	I0717 17:40:21.532034   39305 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:40:21.532567   39305 main.go:141] libmachine: Using API Version  1
	I0717 17:40:21.532593   39305 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:40:21.532881   39305 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:40:21.533066   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:40:21.533236   39305 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:40:21.534638   39305 fix.go:112] recreateIfNeeded on ha-174628: state=Running err=<nil>
	W0717 17:40:21.534668   39305 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 17:40:21.536385   39305 out.go:177] * Updating the running kvm2 "ha-174628" VM ...
	I0717 17:40:21.537760   39305 machine.go:94] provisionDockerMachine start ...
	I0717 17:40:21.537781   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:40:21.537956   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:21.540080   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.540565   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:21.540598   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.540766   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:40:21.540965   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.541108   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.541228   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:40:21.541400   39305 main.go:141] libmachine: Using SSH client type: native
	I0717 17:40:21.541583   39305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:40:21.541594   39305 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 17:40:21.641830   39305 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174628
	
	I0717 17:40:21.641858   39305 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:40:21.642092   39305 buildroot.go:166] provisioning hostname "ha-174628"
	I0717 17:40:21.642113   39305 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:40:21.642277   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:21.644725   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.645135   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:21.645160   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.645310   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:40:21.645495   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.645651   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.645820   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:40:21.645961   39305 main.go:141] libmachine: Using SSH client type: native
	I0717 17:40:21.646118   39305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:40:21.646129   39305 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174628 && echo "ha-174628" | sudo tee /etc/hostname
	I0717 17:40:21.759554   39305 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174628
	
	I0717 17:40:21.759599   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:21.762189   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.762597   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:21.762634   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.762779   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:40:21.762965   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.763131   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.763238   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:40:21.763408   39305 main.go:141] libmachine: Using SSH client type: native
	I0717 17:40:21.763614   39305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:40:21.763636   39305 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174628/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 17:40:21.865553   39305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:40:21.865575   39305 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 17:40:21.865597   39305 buildroot.go:174] setting up certificates
	I0717 17:40:21.865606   39305 provision.go:84] configureAuth start
	I0717 17:40:21.865615   39305 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:40:21.865866   39305 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:40:21.868270   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.868676   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:21.868703   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.868853   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:21.870893   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.871213   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:21.871237   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.871345   39305 provision.go:143] copyHostCerts
	I0717 17:40:21.871390   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:40:21.871424   39305 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 17:40:21.871435   39305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:40:21.871501   39305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 17:40:21.871617   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:40:21.871644   39305 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 17:40:21.871651   39305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:40:21.871677   39305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 17:40:21.871720   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:40:21.871735   39305 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 17:40:21.871741   39305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:40:21.871763   39305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 17:40:21.871826   39305 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.ha-174628 san=[127.0.0.1 192.168.39.100 ha-174628 localhost minikube]
	I0717 17:40:22.013479   39305 provision.go:177] copyRemoteCerts
	I0717 17:40:22.013558   39305 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 17:40:22.013592   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:22.016141   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:22.016519   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:22.016553   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:22.016784   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:40:22.016989   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:22.017131   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:40:22.017278   39305 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:40:22.095020   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 17:40:22.095089   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0717 17:40:22.123795   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 17:40:22.123899   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 17:40:22.147207   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 17:40:22.147292   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 17:40:22.171062   39305 provision.go:87] duration metric: took 305.44263ms to configureAuth
	I0717 17:40:22.171093   39305 buildroot.go:189] setting minikube options for container-runtime
	I0717 17:40:22.171319   39305 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:40:22.171389   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:22.173692   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:22.174035   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:22.174065   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:22.174186   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:40:22.174413   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:22.174597   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:22.174734   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:40:22.174907   39305 main.go:141] libmachine: Using SSH client type: native
	I0717 17:40:22.175058   39305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:40:22.175072   39305 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 17:41:52.904070   39305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 17:41:52.904094   39305 machine.go:97] duration metric: took 1m31.366318438s to provisionDockerMachine
	I0717 17:41:52.904107   39305 start.go:293] postStartSetup for "ha-174628" (driver="kvm2")
	I0717 17:41:52.904132   39305 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 17:41:52.904150   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:52.904476   39305 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 17:41:52.904505   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:41:52.907417   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:52.907881   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:52.907905   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:52.908066   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:41:52.908249   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:52.908411   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:41:52.908536   39305 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:41:52.987942   39305 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 17:41:52.991746   39305 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 17:41:52.991764   39305 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 17:41:52.991823   39305 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 17:41:52.991911   39305 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 17:41:52.991922   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /etc/ssl/certs/215772.pem
	I0717 17:41:52.992019   39305 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 17:41:53.000606   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:41:53.022706   39305 start.go:296] duration metric: took 118.585939ms for postStartSetup
	I0717 17:41:53.022751   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:53.023101   39305 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 17:41:53.023153   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:41:53.025805   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.026253   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:53.026273   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.026498   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:41:53.026709   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:53.026901   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:41:53.027095   39305 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	W0717 17:41:53.106580   39305 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0717 17:41:53.106603   39305 fix.go:56] duration metric: took 1m31.588674359s for fixHost
	I0717 17:41:53.106638   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:41:53.109514   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.109929   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:53.109954   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.110097   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:41:53.110293   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:53.110511   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:53.110702   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:41:53.110922   39305 main.go:141] libmachine: Using SSH client type: native
	I0717 17:41:53.111128   39305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:41:53.111143   39305 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 17:41:53.209302   39305 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238113.163965415
	
	I0717 17:41:53.209329   39305 fix.go:216] guest clock: 1721238113.163965415
	I0717 17:41:53.209335   39305 fix.go:229] Guest: 2024-07-17 17:41:53.163965415 +0000 UTC Remote: 2024-07-17 17:41:53.106614193 +0000 UTC m=+91.711299656 (delta=57.351222ms)
	I0717 17:41:53.209354   39305 fix.go:200] guest clock delta is within tolerance: 57.351222ms
	I0717 17:41:53.209360   39305 start.go:83] releasing machines lock for "ha-174628", held for 1m31.691444595s
	I0717 17:41:53.209383   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:53.209625   39305 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:41:53.212614   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.213001   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:53.213030   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.213134   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:53.213625   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:53.213783   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:53.213874   39305 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 17:41:53.213909   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:41:53.214001   39305 ssh_runner.go:195] Run: cat /version.json
	I0717 17:41:53.214030   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:41:53.216630   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.216964   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.217008   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:53.217048   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.217167   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:41:53.217365   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:53.217517   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:53.217544   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:41:53.217558   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.217699   39305 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:41:53.217786   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:41:53.217948   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:53.218092   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:41:53.218249   39305 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:41:53.325810   39305 ssh_runner.go:195] Run: systemctl --version
	I0717 17:41:53.331869   39305 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 17:41:53.494531   39305 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 17:41:53.500010   39305 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 17:41:53.500065   39305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 17:41:53.508712   39305 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 17:41:53.508731   39305 start.go:495] detecting cgroup driver to use...
	I0717 17:41:53.508787   39305 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 17:41:53.525080   39305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 17:41:53.537720   39305 docker.go:217] disabling cri-docker service (if available) ...
	I0717 17:41:53.537772   39305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 17:41:53.550848   39305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 17:41:53.563389   39305 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 17:41:53.701502   39305 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 17:41:53.844205   39305 docker.go:233] disabling docker service ...
	I0717 17:41:53.844277   39305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 17:41:53.860500   39305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 17:41:53.873397   39305 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 17:41:54.018369   39305 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 17:41:54.183207   39305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 17:41:54.196687   39305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 17:41:54.214724   39305 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 17:41:54.214799   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.225224   39305 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 17:41:54.225294   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.236487   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.246685   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.256623   39305 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 17:41:54.266198   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.275446   39305 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.285306   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.294644   39305 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 17:41:54.303183   39305 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 17:41:54.311581   39305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:41:54.445238   39305 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 17:41:54.700492   39305 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 17:41:54.700562   39305 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 17:41:54.705170   39305 start.go:563] Will wait 60s for crictl version
	I0717 17:41:54.705213   39305 ssh_runner.go:195] Run: which crictl
	I0717 17:41:54.708468   39305 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 17:41:54.750802   39305 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 17:41:54.750910   39305 ssh_runner.go:195] Run: crio --version
	I0717 17:41:54.782994   39305 ssh_runner.go:195] Run: crio --version
	I0717 17:41:54.811002   39305 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 17:41:54.812104   39305 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:41:54.814403   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:54.814785   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:54.814819   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:54.815025   39305 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 17:41:54.819415   39305 kubeadm.go:883] updating cluster {Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 17:41:54.819550   39305 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:41:54.819601   39305 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:41:54.861708   39305 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 17:41:54.861730   39305 crio.go:433] Images already preloaded, skipping extraction
	I0717 17:41:54.861787   39305 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:41:54.895148   39305 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 17:41:54.895190   39305 cache_images.go:84] Images are preloaded, skipping loading
	I0717 17:41:54.895202   39305 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.30.2 crio true true} ...
	I0717 17:41:54.895464   39305 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 17:41:54.895809   39305 ssh_runner.go:195] Run: crio config
	I0717 17:41:54.949147   39305 cni.go:84] Creating CNI manager for ""
	I0717 17:41:54.949164   39305 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 17:41:54.949176   39305 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 17:41:54.949201   39305 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174628 NodeName:ha-174628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 17:41:54.949356   39305 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174628"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 17:41:54.949383   39305 kube-vip.go:115] generating kube-vip config ...
	I0717 17:41:54.949424   39305 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 17:41:54.960586   39305 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 17:41:54.960680   39305 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 17:41:54.960739   39305 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 17:41:54.969577   39305 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 17:41:54.969629   39305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 17:41:54.978318   39305 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 17:41:54.993669   39305 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 17:41:55.008958   39305 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 17:41:55.023889   39305 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 17:41:55.039005   39305 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 17:41:55.043213   39305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:41:55.183937   39305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:41:55.198271   39305 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628 for IP: 192.168.39.100
	I0717 17:41:55.198299   39305 certs.go:194] generating shared ca certs ...
	I0717 17:41:55.198327   39305 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:41:55.198478   39305 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 17:41:55.198522   39305 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 17:41:55.198532   39305 certs.go:256] generating profile certs ...
	I0717 17:41:55.198607   39305 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key
	I0717 17:41:55.198633   39305 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.df14862d
	I0717 17:41:55.198647   39305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.df14862d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.97 192.168.39.187 192.168.39.254]
	I0717 17:41:55.296660   39305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.df14862d ...
	I0717 17:41:55.296688   39305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.df14862d: {Name:mkec4f7fab86bbcc849b125ea863b5b4331e7f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:41:55.296845   39305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.df14862d ...
	I0717 17:41:55.296856   39305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.df14862d: {Name:mkea4da757864a30889a26df0dd583fc93fc2fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:41:55.296922   39305 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.df14862d -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt
	I0717 17:41:55.297081   39305 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.df14862d -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key
	I0717 17:41:55.297201   39305 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key
	I0717 17:41:55.297215   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 17:41:55.297226   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 17:41:55.297237   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 17:41:55.297249   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 17:41:55.297259   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 17:41:55.297271   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 17:41:55.297296   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 17:41:55.297322   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 17:41:55.297372   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 17:41:55.297400   39305 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 17:41:55.297409   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 17:41:55.297429   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 17:41:55.297449   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 17:41:55.297471   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 17:41:55.297505   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:41:55.297531   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /usr/share/ca-certificates/215772.pem
	I0717 17:41:55.297547   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:41:55.297558   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem -> /usr/share/ca-certificates/21577.pem
	I0717 17:41:55.298114   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 17:41:55.322608   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 17:41:55.344511   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 17:41:55.366063   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 17:41:55.387020   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 17:41:55.409022   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 17:41:55.430672   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 17:41:55.452898   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 17:41:55.474640   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 17:41:55.495749   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 17:41:55.516526   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 17:41:55.537201   39305 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 17:41:55.552230   39305 ssh_runner.go:195] Run: openssl version
	I0717 17:41:55.558056   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 17:41:55.567987   39305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 17:41:55.572108   39305 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 17:41:55.572166   39305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 17:41:55.577361   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 17:41:55.585885   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 17:41:55.595609   39305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 17:41:55.599479   39305 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 17:41:55.599523   39305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 17:41:55.604587   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 17:41:55.612961   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 17:41:55.622461   39305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:41:55.626496   39305 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:41:55.626541   39305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:41:55.631484   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 17:41:55.639755   39305 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 17:41:55.643806   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 17:41:55.648975   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 17:41:55.654328   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 17:41:55.659522   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 17:41:55.664323   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 17:41:55.669298   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 17:41:55.674284   39305 kubeadm.go:392] StartCluster: {Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:41:55.674413   39305 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 17:41:55.674452   39305 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 17:41:55.709330   39305 cri.go:89] found id: "bac1830ddc0ce3bb283dc0ff8ea48a22f58663f35dc0d244d9f38455c1a0d26d"
	I0717 17:41:55.709354   39305 cri.go:89] found id: "f57484a0f36cb0e3be2259b95fa649943aa4e1a3dc1cf2e88fbd1e4aae633a65"
	I0717 17:41:55.709360   39305 cri.go:89] found id: "4c2a82d5779c30132aa024c001d6b11525959eaf1e17d978f6a60cf60c14ea2e"
	I0717 17:41:55.709365   39305 cri.go:89] found id: "e8e4922ea1eac7b61df3c5c3284c361f60b0cbb9299b480529b43872a061b780"
	I0717 17:41:55.709369   39305 cri.go:89] found id: "976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9"
	I0717 17:41:55.709373   39305 cri.go:89] found id: "97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb"
	I0717 17:41:55.709379   39305 cri.go:89] found id: "2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0"
	I0717 17:41:55.709383   39305 cri.go:89] found id: "d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78"
	I0717 17:41:55.709387   39305 cri.go:89] found id: "370441d5e9e25be3ceff0e96f53875a159099004aa797d2570be4e3e61aa9e59"
	I0717 17:41:55.709393   39305 cri.go:89] found id: "e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147"
	I0717 17:41:55.709409   39305 cri.go:89] found id: "889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9"
	I0717 17:41:55.709416   39305 cri.go:89] found id: "9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd"
	I0717 17:41:55.709421   39305 cri.go:89] found id: "dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac"
	I0717 17:41:55.709428   39305 cri.go:89] found id: ""
	I0717 17:41:55.709477   39305 ssh_runner.go:195] Run: sudo runc list -f json
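	(Editor's note, not part of the captured output: the two Run lines above show minikube enumerating kube-system containers with "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" and then "runc list -f json". As a minimal, hypothetical sketch under the assumption that CRI-O is listening on its default socket /var/run/crio/crio.sock and that the k8s.io/cri-api v1 client is available, the same label-filtered listing could be issued against the CRI RuntimeService directly, which is also the API whose ListContainers requests and responses appear in the CRI-O debug log below.)

	// sketch.go - hypothetical illustration only; not invoked by the test suite.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default unix socket; adjust if the runtime is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// Filter by the pod-namespace label, mirroring the crictl invocation above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				LabelSelector: map[string]string{
					"io.kubernetes.pod.namespace": "kube-system",
				},
			},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Prints container ID and state, corresponding to the "found id:" lines above.
			fmt.Println(c.Id, c.State)
		}
	}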
	
	
	==> CRI-O <==
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.358154919Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:94b647eeb1369e4493dc85bcf955b2befed32d2521170c361a8a9b5399948e6e,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-8zv26,Uid:fe9c4738-6334-4fc5-b8a3-dc249512fa0a,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721238154312424674,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:33:44.738121805Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c88634d156297e8cffe86a0c16661b1b84d415c5d7d9c9dd1752434ff17dc477,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-174628,Uid:3c6ba04a85bfbff5a957b7732c295eff,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1721238133322250794,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6ba04a85bfbff5a957b7732c295eff,},Annotations:map[string]string{kubernetes.io/config.hash: 3c6ba04a85bfbff5a957b7732c295eff,kubernetes.io/config.seen: 2024-07-17T17:41:54.994537038Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ce3f0eae7af4b8c50f907a2439475bdaebec3b8543d2009b15a632a25fbfd3c3,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nb567,Uid:1739ac64-be05-4438-9a8f-a0d2821a1650,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721238120669752507,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-17T17:30:21.437343453Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a8eeb7452b65fe4a38b12f3f38590d258f7d09b65da4fb816c76b123352d2531,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ljjl7,Uid:2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721238120656497372,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:30:21.445292167Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-174628,Uid:de7a365d4a82da636f5e615f6e397e41,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721238120628180635,Labels:map[string]strin
g{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.100:8443,kubernetes.io/config.hash: de7a365d4a82da636f5e615f6e397e41,kubernetes.io/config.seen: 2024-07-17T17:29:53.149584367Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-174628,Uid:fc801341b913ca6bb6e3fd73c9182232,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721238120623315101,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd
73c9182232,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fc801341b913ca6bb6e3fd73c9182232,kubernetes.io/config.seen: 2024-07-17T17:29:53.149585170Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:11ad8e8d3095fed789474425b90c741241c2a47734b7d6d0ba5816f7742455e5,Metadata:&PodSandboxMetadata{Name:etcd-ha-174628,Uid:eb8260866404ea84b14c26f81effc219,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721238120622925475,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.100:2379,kubernetes.io/config.hash: eb8260866404ea84b14c26f81effc219,kubernetes.io/config.seen: 2024-07-17T17:29:53.149583312Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3662e32676f
84a5c133443791f0a3a0f8f72902b220c453ec8112f3d3cd1d292,Metadata:&PodSandboxMetadata{Name:kube-proxy-fqf9q,Uid:f74d57a9-38a2-464d-991f-fc8905fdbe3f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721238120613188432,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:30:05.308557814Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eaf3b42369089d6ae0fa4f237fb70670b8c932dbb0828d6a377455465781939c,Metadata:&PodSandboxMetadata{Name:kindnet-k6jnp,Uid:9bca93ed-aca5-4540-990c-d9e6209d12d0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721238120600513574,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:30:05.312003339Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:402fa28dbe97771bbb2f28fb97e12f7a31e3495a304ab85f4caefa22e16d24e5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-174628,Uid:57815d244795c90550b97bbf781e6e77,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721238120592867534,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 57815d244795c90550b97bbf781e6e77,kubernetes.io/config.seen: 2024-07-17T17:29:53.149585854Z,kubernetes.io/config.source: f
ile,},RuntimeHandler:,},&PodSandbox{Id:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8c0601bb-36f6-434d-8e9d-1e326bf682f5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721238120575100431,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imag
ePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T17:30:21.447877127Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-8zv26,Uid:fe9c4738-6334-4fc5-b8a3-dc249512fa0a,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721237625054874841,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:33:44.738121805Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-ljjl7,Uid:2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721237421773784404,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:30:21.445292167Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nb567,Uid:1739ac64-be05-4438-9a8f-a0d2821a1650,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721237421744503154,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:30:21.437343453Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&PodSandboxMetadata{Name:kindnet-k6jnp,Uid:9bca93ed-aca5-4540-990c-d9e6209d12d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721237406232647231,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:30:05.312003339Z,kubernetes.io/config.source: api,},Runt
imeHandler:,},&PodSandbox{Id:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&PodSandboxMetadata{Name:kube-proxy-fqf9q,Uid:f74d57a9-38a2-464d-991f-fc8905fdbe3f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721237406227979955,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:30:05.308557814Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-174628,Uid:57815d244795c90550b97bbf781e6e77,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721237386786168538,Labels:map[string]string{component: kube-scheduler,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 57815d244795c90550b97bbf781e6e77,kubernetes.io/config.seen: 2024-07-17T17:29:46.328161972Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&PodSandboxMetadata{Name:etcd-ha-174628,Uid:eb8260866404ea84b14c26f81effc219,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721237386780883399,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.100:2379,kubernetes.io/config.hash: eb826086
6404ea84b14c26f81effc219,kubernetes.io/config.seen: 2024-07-17T17:29:46.328158752Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6e83a429-e2d0-4487-8bb2-b45677091ece name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.358964129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc795a9e-7998-4c17-b971-a70703f559d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.359015365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc795a9e-7998-4c17-b971-a70703f559d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.359402850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6c478540f2d235d413baa4d4b1eb115afe319327ba78736a44196ea41de7ad9,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721238169205488492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c844fa26b05ab402b5550aaf261619fd0941934823e012a82a8ef73c185a6f5a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721238164205844831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731dfd6c523fb7b9024ae6d32cdb21435f506403f647995fd463c05da6ca3883,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721238161221363556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa1c2154a9a4fe39461bed1d21fb6a362b654583e5f79e01e7f0c3c1391993,PodSandboxId:94b647eeb1369e4493dc85bcf955b2befed32d2521170c361a8a9b5399948e6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721238154435466676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1536d680fdb6c0b0a97dfde782f97e1b635bb7ba734b54c90f5a637e6121b403,PodSandboxId:c88634d156297e8cffe86a0c16661b1b84d415c5d7d9c9dd1752434ff17dc477,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721238133419777863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6ba04a85bfbff5a957b7732c295eff,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1cce1c506e03a9c4fbe0f8d38792493de36070eb5cdd03a5cedf085c157a6d,PodSandboxId:3662e32676f84a5c133443791f0a3a0f8f72902b220c453ec8112f3d3cd1d292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721238121368195649,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:52d1cab66f48cbd8674b8f411ab80487389fb12b4710edc84607d1ef666b676a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721238121194388588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:cb8f9753c4a91113f4d19fb976afdc57ba90f879488ae102acb94522b4753834,PodSandboxId:ce3f0eae7af4b8c50f907a2439475bdaebec3b8543d2009b15a632a25fbfd3c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121266226656,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f83ba4f05083b01b85083344f1ceece3524c9ed469106ad62b56da508a5126d,PodSandboxId:a8eeb7452b65fe4a38b12f3f38590d258f7d09b65da4fb816c76b123352d2531,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121201747500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a32601104ebf78674d54d79144e835165efe3710687b6a61a6e11009905acd,PodSandboxId:11ad8e8d3095fed789474425b90c741241c2a47734b7d6d0ba5816f7742455e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721238121252764480,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aeb0a6d29b3db9397dbbe275b13b7c97ca27bb9e4805af79da925fbad61b1af,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721238121120781163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967342f385c9ab30f017f6226ebc0dd6e6f535d7abf22a5884c63765726387b1,PodSandboxId:eaf3b42369089d6ae0fa4f237fb70670b8c932dbb0828d6a377455465781939c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721238121047011528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-a
ca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0e4ff14e576853890e93a9a6a937dd70d94bc7822374634f11e460ae6b3749,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721238120916030757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d
-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0606a3ecaf44ed6f342c1b254dc982cccb31c0296258c76f4f5f18927216ea47,PodSandboxId:402fa28dbe97771bbb2f28fb97e12f7a31e3495a304ab85f4caefa22e16d24e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721238120999347906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annot
ations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721237628009986530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421982456232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kuberne
tes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421928772505,Labels:map[string]string{io.kubernetes.container.name: coredn
s,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721237410216101906,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721237406540064620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721237387075455569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721237387046822608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc795a9e-7998-4c17-b971-a70703f559d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.395174395Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3dfb3ba-b68b-45cc-9c5a-5f3e5ce3328c name=/runtime.v1.RuntimeService/Version
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.395257896Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3dfb3ba-b68b-45cc-9c5a-5f3e5ce3328c name=/runtime.v1.RuntimeService/Version
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.396186850Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7474d95-ad73-4d5c-ba7c-1ba14ce6080c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.396640108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721238270396615585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7474d95-ad73-4d5c-ba7c-1ba14ce6080c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.397269732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2d64cc1-6fa1-4e38-ab52-f7afe3d0cab6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.397325587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2d64cc1-6fa1-4e38-ab52-f7afe3d0cab6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.397921601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6c478540f2d235d413baa4d4b1eb115afe319327ba78736a44196ea41de7ad9,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721238169205488492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c844fa26b05ab402b5550aaf261619fd0941934823e012a82a8ef73c185a6f5a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721238164205844831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731dfd6c523fb7b9024ae6d32cdb21435f506403f647995fd463c05da6ca3883,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721238161221363556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa1c2154a9a4fe39461bed1d21fb6a362b654583e5f79e01e7f0c3c1391993,PodSandboxId:94b647eeb1369e4493dc85bcf955b2befed32d2521170c361a8a9b5399948e6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721238154435466676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1536d680fdb6c0b0a97dfde782f97e1b635bb7ba734b54c90f5a637e6121b403,PodSandboxId:c88634d156297e8cffe86a0c16661b1b84d415c5d7d9c9dd1752434ff17dc477,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721238133419777863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6ba04a85bfbff5a957b7732c295eff,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1cce1c506e03a9c4fbe0f8d38792493de36070eb5cdd03a5cedf085c157a6d,PodSandboxId:3662e32676f84a5c133443791f0a3a0f8f72902b220c453ec8112f3d3cd1d292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721238121368195649,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:52d1cab66f48cbd8674b8f411ab80487389fb12b4710edc84607d1ef666b676a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721238121194388588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:cb8f9753c4a91113f4d19fb976afdc57ba90f879488ae102acb94522b4753834,PodSandboxId:ce3f0eae7af4b8c50f907a2439475bdaebec3b8543d2009b15a632a25fbfd3c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121266226656,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f83ba4f05083b01b85083344f1ceece3524c9ed469106ad62b56da508a5126d,PodSandboxId:a8eeb7452b65fe4a38b12f3f38590d258f7d09b65da4fb816c76b123352d2531,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121201747500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a32601104ebf78674d54d79144e835165efe3710687b6a61a6e11009905acd,PodSandboxId:11ad8e8d3095fed789474425b90c741241c2a47734b7d6d0ba5816f7742455e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721238121252764480,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aeb0a6d29b3db9397dbbe275b13b7c97ca27bb9e4805af79da925fbad61b1af,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721238121120781163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967342f385c9ab30f017f6226ebc0dd6e6f535d7abf22a5884c63765726387b1,PodSandboxId:eaf3b42369089d6ae0fa4f237fb70670b8c932dbb0828d6a377455465781939c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721238121047011528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-a
ca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0e4ff14e576853890e93a9a6a937dd70d94bc7822374634f11e460ae6b3749,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721238120916030757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d
-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0606a3ecaf44ed6f342c1b254dc982cccb31c0296258c76f4f5f18927216ea47,PodSandboxId:402fa28dbe97771bbb2f28fb97e12f7a31e3495a304ab85f4caefa22e16d24e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721238120999347906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annot
ations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721237628009986530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421982456232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kuberne
tes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421928772505,Labels:map[string]string{io.kubernetes.container.name: coredn
s,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721237410216101906,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721237406540064620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721237387075455569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721237387046822608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2d64cc1-6fa1-4e38-ab52-f7afe3d0cab6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.437548676Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd1a14d8-e988-438a-84f8-5c9603d05d32 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.437619534Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd1a14d8-e988-438a-84f8-5c9603d05d32 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.438933748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=33efd7fb-cae7-4873-b403-fb5ec83e65f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.439368072Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721238270439346903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=33efd7fb-cae7-4873-b403-fb5ec83e65f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.445593427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=668555df-d888-4255-a647-6254ab966a44 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.446142963Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=668555df-d888-4255-a647-6254ab966a44 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.447454181Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6c478540f2d235d413baa4d4b1eb115afe319327ba78736a44196ea41de7ad9,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721238169205488492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c844fa26b05ab402b5550aaf261619fd0941934823e012a82a8ef73c185a6f5a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721238164205844831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731dfd6c523fb7b9024ae6d32cdb21435f506403f647995fd463c05da6ca3883,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721238161221363556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa1c2154a9a4fe39461bed1d21fb6a362b654583e5f79e01e7f0c3c1391993,PodSandboxId:94b647eeb1369e4493dc85bcf955b2befed32d2521170c361a8a9b5399948e6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721238154435466676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1536d680fdb6c0b0a97dfde782f97e1b635bb7ba734b54c90f5a637e6121b403,PodSandboxId:c88634d156297e8cffe86a0c16661b1b84d415c5d7d9c9dd1752434ff17dc477,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721238133419777863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6ba04a85bfbff5a957b7732c295eff,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1cce1c506e03a9c4fbe0f8d38792493de36070eb5cdd03a5cedf085c157a6d,PodSandboxId:3662e32676f84a5c133443791f0a3a0f8f72902b220c453ec8112f3d3cd1d292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721238121368195649,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:52d1cab66f48cbd8674b8f411ab80487389fb12b4710edc84607d1ef666b676a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721238121194388588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:cb8f9753c4a91113f4d19fb976afdc57ba90f879488ae102acb94522b4753834,PodSandboxId:ce3f0eae7af4b8c50f907a2439475bdaebec3b8543d2009b15a632a25fbfd3c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121266226656,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f83ba4f05083b01b85083344f1ceece3524c9ed469106ad62b56da508a5126d,PodSandboxId:a8eeb7452b65fe4a38b12f3f38590d258f7d09b65da4fb816c76b123352d2531,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121201747500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a32601104ebf78674d54d79144e835165efe3710687b6a61a6e11009905acd,PodSandboxId:11ad8e8d3095fed789474425b90c741241c2a47734b7d6d0ba5816f7742455e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721238121252764480,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aeb0a6d29b3db9397dbbe275b13b7c97ca27bb9e4805af79da925fbad61b1af,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721238121120781163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967342f385c9ab30f017f6226ebc0dd6e6f535d7abf22a5884c63765726387b1,PodSandboxId:eaf3b42369089d6ae0fa4f237fb70670b8c932dbb0828d6a377455465781939c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721238121047011528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-a
ca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0e4ff14e576853890e93a9a6a937dd70d94bc7822374634f11e460ae6b3749,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721238120916030757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d
-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0606a3ecaf44ed6f342c1b254dc982cccb31c0296258c76f4f5f18927216ea47,PodSandboxId:402fa28dbe97771bbb2f28fb97e12f7a31e3495a304ab85f4caefa22e16d24e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721238120999347906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annot
ations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721237628009986530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421982456232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kuberne
tes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421928772505,Labels:map[string]string{io.kubernetes.container.name: coredn
s,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721237410216101906,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721237406540064620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721237387075455569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721237387046822608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=668555df-d888-4255-a647-6254ab966a44 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.493835471Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d91ea02e-4591-4404-b661-d83a2994f8ab name=/runtime.v1.RuntimeService/Version
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.493934342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d91ea02e-4591-4404-b661-d83a2994f8ab name=/runtime.v1.RuntimeService/Version
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.495151724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa23b1ce-6b30-45b4-a1d8-ab610036feab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.495604609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721238270495582409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa23b1ce-6b30-45b4-a1d8-ab610036feab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.496264949Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=559e9f4c-7ced-44aa-a136-2f5c7256c753 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.496336368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=559e9f4c-7ced-44aa-a136-2f5c7256c753 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:44:30 ha-174628 crio[3765]: time="2024-07-17 17:44:30.496888353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6c478540f2d235d413baa4d4b1eb115afe319327ba78736a44196ea41de7ad9,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721238169205488492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c844fa26b05ab402b5550aaf261619fd0941934823e012a82a8ef73c185a6f5a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721238164205844831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731dfd6c523fb7b9024ae6d32cdb21435f506403f647995fd463c05da6ca3883,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721238161221363556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa1c2154a9a4fe39461bed1d21fb6a362b654583e5f79e01e7f0c3c1391993,PodSandboxId:94b647eeb1369e4493dc85bcf955b2befed32d2521170c361a8a9b5399948e6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721238154435466676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1536d680fdb6c0b0a97dfde782f97e1b635bb7ba734b54c90f5a637e6121b403,PodSandboxId:c88634d156297e8cffe86a0c16661b1b84d415c5d7d9c9dd1752434ff17dc477,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721238133419777863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6ba04a85bfbff5a957b7732c295eff,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1cce1c506e03a9c4fbe0f8d38792493de36070eb5cdd03a5cedf085c157a6d,PodSandboxId:3662e32676f84a5c133443791f0a3a0f8f72902b220c453ec8112f3d3cd1d292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721238121368195649,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:52d1cab66f48cbd8674b8f411ab80487389fb12b4710edc84607d1ef666b676a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721238121194388588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:cb8f9753c4a91113f4d19fb976afdc57ba90f879488ae102acb94522b4753834,PodSandboxId:ce3f0eae7af4b8c50f907a2439475bdaebec3b8543d2009b15a632a25fbfd3c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121266226656,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f83ba4f05083b01b85083344f1ceece3524c9ed469106ad62b56da508a5126d,PodSandboxId:a8eeb7452b65fe4a38b12f3f38590d258f7d09b65da4fb816c76b123352d2531,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121201747500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a32601104ebf78674d54d79144e835165efe3710687b6a61a6e11009905acd,PodSandboxId:11ad8e8d3095fed789474425b90c741241c2a47734b7d6d0ba5816f7742455e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721238121252764480,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aeb0a6d29b3db9397dbbe275b13b7c97ca27bb9e4805af79da925fbad61b1af,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721238121120781163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967342f385c9ab30f017f6226ebc0dd6e6f535d7abf22a5884c63765726387b1,PodSandboxId:eaf3b42369089d6ae0fa4f237fb70670b8c932dbb0828d6a377455465781939c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721238121047011528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-a
ca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0e4ff14e576853890e93a9a6a937dd70d94bc7822374634f11e460ae6b3749,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721238120916030757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d
-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0606a3ecaf44ed6f342c1b254dc982cccb31c0296258c76f4f5f18927216ea47,PodSandboxId:402fa28dbe97771bbb2f28fb97e12f7a31e3495a304ab85f4caefa22e16d24e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721238120999347906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annot
ations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721237628009986530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421982456232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kuberne
tes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421928772505,Labels:map[string]string{io.kubernetes.container.name: coredn
s,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721237410216101906,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721237406540064620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721237387075455569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721237387046822608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=559e9f4c-7ced-44aa-a136-2f5c7256c753 name=/runtime.v1.RuntimeService/ListContainers
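	The crio debug entries above are just the server side of ordinary CRI gRPC calls (Version, ImageFsInfo, ListContainers). As a minimal illustrative sketch only (not part of the test harness), the same ListContainers RPC could be issued directly against the node's CRI-O socket; the socket path /var/run/crio/crio.sock and the output formatting below are assumptions, not taken from this report:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket path; adjust if the runtime endpoint differs.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// Same RPC that appears in the crio log as /runtime.v1.RuntimeService/ListContainers.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			// Print container ID, name, and state, mirroring the fields shown in the log above.
			fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State.String())
		}
	}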
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b6c478540f2d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   716f5357f3d2e       storage-provisioner
	c844fa26b05ab       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   2                   af17cf9fe4c44       kube-controller-manager-ha-174628
	731dfd6c523fb       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            3                   69b8050f37d85       kube-apiserver-ha-174628
	19fa1c2154a9a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   94b647eeb1369       busybox-fc5497c4f-8zv26
	1536d680fdb6c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   c88634d156297       kube-vip-ha-174628
	ef1cce1c506e0       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      2 minutes ago        Running             kube-proxy                1                   3662e32676f84       kube-proxy-fqf9q
	cb8f9753c4a91       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   ce3f0eae7af4b       coredns-7db6d8ff4d-nb567
	06a32601104eb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   11ad8e8d3095f       etcd-ha-174628
	6f83ba4f05083       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   a8eeb7452b65f       coredns-7db6d8ff4d-ljjl7
	52d1cab66f48c       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      2 minutes ago        Exited              kube-controller-manager   1                   af17cf9fe4c44       kube-controller-manager-ha-174628
	5aeb0a6d29b3d       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      2 minutes ago        Exited              kube-apiserver            2                   69b8050f37d85       kube-apiserver-ha-174628
	967342f385c9a       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      2 minutes ago        Running             kindnet-cni               1                   eaf3b42369089       kindnet-k6jnp
	0606a3ecaf44e       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      2 minutes ago        Running             kube-scheduler            1                   402fa28dbe977       kube-scheduler-ha-174628
	6d0e4ff14e576       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   716f5357f3d2e       storage-provisioner
	88ba3b0cb3105       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   c4d7c5b8a369b       busybox-fc5497c4f-8zv26
	976aeedd4a51e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   6732d32de6a25       coredns-7db6d8ff4d-ljjl7
	97987539971dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   9ca7e3b66f8e6       coredns-7db6d8ff4d-nb567
	2fefa59bf46cd       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    14 minutes ago       Exited              kindnet-cni               0                   db21995c3cb31       kindnet-k6jnp
	d139046cefa3a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      14 minutes ago       Exited              kube-proxy                0                   4b7a03b7f681c       kube-proxy-fqf9q
	e1c91b7db4ab1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   d488537da1381       etcd-ha-174628
	889d28a83e85b       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      14 minutes ago       Exited              kube-scheduler            0                   4c7f495eb3d6a       kube-scheduler-ha-174628
	
	
	==> coredns [6f83ba4f05083b01b85083344f1ceece3524c9ed469106ad62b56da508a5126d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:34304->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1729886385]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:42:13.196) (total time: 10058ms):
	Trace[1729886385]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:34304->10.96.0.1:443: read: connection reset by peer 10058ms (17:42:23.255)
	Trace[1729886385]: [10.058588339s] [10.058588339s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:34304->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:34334->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:34334->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9] <==
	[INFO] 10.244.0.4:42628 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001405769s
	[INFO] 10.244.0.4:53106 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132475s
	[INFO] 10.244.1.2:56143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010532s
	[INFO] 10.244.1.2:57864 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093166s
	[INFO] 10.244.1.2:36333 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127244s
	[INFO] 10.244.1.2:59545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001305574s
	[INFO] 10.244.1.2:38967 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068655s
	[INFO] 10.244.2.2:42756 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113607s
	[INFO] 10.244.2.2:43563 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069199s
	[INFO] 10.244.0.4:59480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109399s
	[INFO] 10.244.0.4:42046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068182s
	[INFO] 10.244.0.4:52729 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087202s
	[INFO] 10.244.1.2:54148 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075008s
	[INFO] 10.244.2.2:34613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101677s
	[INFO] 10.244.2.2:34221 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203479s
	[INFO] 10.244.0.4:35705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081127s
	[INFO] 10.244.0.4:36734 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090761s
	[INFO] 10.244.1.2:34328 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093559s
	[INFO] 10.244.1.2:39930 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149652s
	[INFO] 10.244.1.2:55584 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101975s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb] <==
	[INFO] 10.244.1.2:51622 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00157319s
	[INFO] 10.244.2.2:60810 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001843s
	[INFO] 10.244.2.2:59317 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00028437s
	[INFO] 10.244.2.2:38028 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131271s
	[INFO] 10.244.0.4:34076 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171504s
	[INFO] 10.244.0.4:47718 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126429s
	[INFO] 10.244.1.2:45110 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001972368s
	[INFO] 10.244.1.2:56072 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000151997s
	[INFO] 10.244.1.2:56149 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091586s
	[INFO] 10.244.2.2:58101 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116587s
	[INFO] 10.244.2.2:38105 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059217s
	[INFO] 10.244.0.4:33680 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067251s
	[INFO] 10.244.1.2:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149516s
	[INFO] 10.244.1.2:49668 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120356s
	[INFO] 10.244.1.2:39442 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065763s
	[INFO] 10.244.2.2:49955 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116571s
	[INFO] 10.244.2.2:46651 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013941s
	[INFO] 10.244.0.4:39128 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097533s
	[INFO] 10.244.0.4:36840 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042262s
	[INFO] 10.244.1.2:36575 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084857s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cb8f9753c4a91113f4d19fb976afdc57ba90f879488ae102acb94522b4753834] <==
	Trace[1915939027]: [10.00088773s] [10.00088773s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41532->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41532->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41540->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[389132317]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:42:16.236) (total time: 10069ms):
	Trace[389132317]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41540->10.96.0.1:443: read: connection reset by peer 10069ms (17:42:26.306)
	Trace[389132317]: [10.069490426s] [10.069490426s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41540->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-174628
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T17_29_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:29:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:44:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:42:42 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:42:42 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:42:42 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:42:42 +0000   Wed, 17 Jul 2024 17:30:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-174628
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 38d679c72879470c96b5b9e9677b521d
	  System UUID:                38d679c7-2879-470c-96b5-b9e9677b521d
	  Boot ID:                    dc99f06a-b6ac-4ceb-b149-a41be92c5af1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8zv26              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-ljjl7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-nb567             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-174628                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-k6jnp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-174628             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-174628    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-fqf9q                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-174628             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-174628                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 106s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-174628 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-174628 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-174628 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-174628 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Warning  ContainerGCFailed        2m37s (x2 over 3m37s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           101s                   node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal   RegisteredNode           94s                    node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal   RegisteredNode           25s                    node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	
	
	Name:               ha-174628-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T17_32_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:32:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:44:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:43:55 +0000   Wed, 17 Jul 2024 17:42:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:43:55 +0000   Wed, 17 Jul 2024 17:42:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:43:55 +0000   Wed, 17 Jul 2024 17:42:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:43:55 +0000   Wed, 17 Jul 2024 17:42:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-174628-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 903b989e686a4ab6b3e3c3b6b498bfac
	  System UUID:                903b989e-686a-4ab6-b3e3-c3b6b498bfac
	  Boot ID:                    67d94d54-1e0a-423d-8e6e-512d0032972e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ftgzz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-174628-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-79txz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-174628-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-174628-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7lchn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-174628-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-174628-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 79s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-174628-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-174628-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-174628-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  NodeNotReady             9m1s                   node-controller  Node ha-174628-m02 status is now: NodeNotReady
	  Normal  Starting                 2m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m15s (x8 over 2m15s)  kubelet          Node ha-174628-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s (x8 over 2m15s)  kubelet          Node ha-174628-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s (x7 over 2m15s)  kubelet          Node ha-174628-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           101s                   node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           94s                    node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           25s                    node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	
	
	Name:               ha-174628-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T17_33_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:33:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:44:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:44:03 +0000   Wed, 17 Jul 2024 17:43:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:44:03 +0000   Wed, 17 Jul 2024 17:43:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:44:03 +0000   Wed, 17 Jul 2024 17:43:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:44:03 +0000   Wed, 17 Jul 2024 17:43:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    ha-174628-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e252934bd064e64b4b5442d8b76155e
	  System UUID:                7e252934-bd06-4e64-b4b5-442d8b76155e
	  Boot ID:                    bdae777f-fb99-40e7-a42b-9c1b14128176
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5mnv5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-174628-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-p7tg6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-174628-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-174628-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-tjkww                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-174628-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-174628-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 42s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node ha-174628-m03 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           11m                node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node ha-174628-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node ha-174628-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-174628-m03 status is now: NodeReady
	  Normal   RegisteredNode           100s               node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	  Normal   RegisteredNode           94s                node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	  Normal   NodeNotReady             60s                node-controller  Node ha-174628-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  58s (x2 over 58s)  kubelet          Node ha-174628-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x2 over 58s)  kubelet          Node ha-174628-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x2 over 58s)  kubelet          Node ha-174628-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 58s                kubelet          Node ha-174628-m03 has been rebooted, boot id: bdae777f-fb99-40e7-a42b-9c1b14128176
	  Normal   NodeReady                58s                kubelet          Node ha-174628-m03 status is now: NodeReady
	  Normal   RegisteredNode           25s                node-controller  Node ha-174628-m03 event: Registered Node ha-174628-m03 in Controller
	
	
	Name:               ha-174628-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T17_34_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:34:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:44:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:44:23 +0000   Wed, 17 Jul 2024 17:44:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:44:23 +0000   Wed, 17 Jul 2024 17:44:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:44:23 +0000   Wed, 17 Jul 2024 17:44:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:44:23 +0000   Wed, 17 Jul 2024 17:44:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-174628-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1beb916d1ab94a9e97732204939d8f7c
	  System UUID:                1beb916d-1ab9-4a9e-9773-2204939d8f7c
	  Boot ID:                    9c57600f-4318-45f1-8bee-1e5facd32841
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pt58p       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-gb548    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-174628-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-174628-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-174628-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   NodeReady                9m55s              kubelet          Node ha-174628-m04 status is now: NodeReady
	  Normal   RegisteredNode           101s               node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   NodeNotReady             61s                node-controller  Node ha-174628-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           26s                node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-174628-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-174628-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-174628-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-174628-m04 has been rebooted, boot id: 9c57600f-4318-45f1-8bee-1e5facd32841
	  Normal   NodeReady                8s (x2 over 8s)    kubelet          Node ha-174628-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.259835] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.065539] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054802] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.175947] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.103995] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.251338] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.953322] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +4.318399] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +0.059032] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.943760] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.083790] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.749430] kauditd_printk_skb: 18 callbacks suppressed
	[Jul17 17:30] kauditd_printk_skb: 38 callbacks suppressed
	[Jul17 17:32] kauditd_printk_skb: 26 callbacks suppressed
	[Jul17 17:38] kauditd_printk_skb: 1 callbacks suppressed
	[Jul17 17:41] systemd-fstab-generator[3683]: Ignoring "noauto" option for root device
	[  +0.138073] systemd-fstab-generator[3695]: Ignoring "noauto" option for root device
	[  +0.172600] systemd-fstab-generator[3709]: Ignoring "noauto" option for root device
	[  +0.161501] systemd-fstab-generator[3721]: Ignoring "noauto" option for root device
	[  +0.271525] systemd-fstab-generator[3749]: Ignoring "noauto" option for root device
	[  +0.728523] systemd-fstab-generator[3851]: Ignoring "noauto" option for root device
	[  +5.553998] kauditd_printk_skb: 122 callbacks suppressed
	[Jul17 17:42] kauditd_printk_skb: 85 callbacks suppressed
	[ +43.397968] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [06a32601104ebf78674d54d79144e835165efe3710687b6a61a6e11009905acd] <==
	{"level":"warn","ts":"2024-07-17T17:43:29.699904Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.187:2380/version","remote-member-id":"6dbdd402c8b44d8e","error":"Get \"https://192.168.39.187:2380/version\": dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:29.700052Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6dbdd402c8b44d8e","error":"Get \"https://192.168.39.187:2380/version\": dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:32.169865Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6dbdd402c8b44d8e","rtt":"0s","error":"dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:32.16995Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6dbdd402c8b44d8e","rtt":"0s","error":"dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:33.701655Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.187:2380/version","remote-member-id":"6dbdd402c8b44d8e","error":"Get \"https://192.168.39.187:2380/version\": dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:33.701759Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6dbdd402c8b44d8e","error":"Get \"https://192.168.39.187:2380/version\": dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:37.170588Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6dbdd402c8b44d8e","rtt":"0s","error":"dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:37.170617Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6dbdd402c8b44d8e","rtt":"0s","error":"dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:37.704229Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.187:2380/version","remote-member-id":"6dbdd402c8b44d8e","error":"Get \"https://192.168.39.187:2380/version\": dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:37.704542Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6dbdd402c8b44d8e","error":"Get \"https://192.168.39.187:2380/version\": dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:41.706412Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.187:2380/version","remote-member-id":"6dbdd402c8b44d8e","error":"Get \"https://192.168.39.187:2380/version\": dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:41.706487Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6dbdd402c8b44d8e","error":"Get \"https://192.168.39.187:2380/version\": dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:42.17145Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6dbdd402c8b44d8e","rtt":"0s","error":"dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:42.171629Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6dbdd402c8b44d8e","rtt":"0s","error":"dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:45.708386Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.187:2380/version","remote-member-id":"6dbdd402c8b44d8e","error":"Get \"https://192.168.39.187:2380/version\": dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:45.708549Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6dbdd402c8b44d8e","error":"Get \"https://192.168.39.187:2380/version\": dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-17T17:43:46.495098Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:43:46.500925Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:43:46.501137Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:43:46.50919Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3276445ff8d31e34","to":"6dbdd402c8b44d8e","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-17T17:43:46.509314Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:43:46.511188Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3276445ff8d31e34","to":"6dbdd402c8b44d8e","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-17T17:43:46.51135Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"warn","ts":"2024-07-17T17:43:47.172078Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6dbdd402c8b44d8e","rtt":"0s","error":"dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:47.172263Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6dbdd402c8b44d8e","rtt":"0s","error":"dial tcp 192.168.39.187:2380: connect: connection refused"}
	
	
	==> etcd [e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147] <==
	2024/07/17 17:40:22 WARNING: [core] [Server #9] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T17:40:22.292614Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"833.91074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-17T17:40:22.301553Z","caller":"traceutil/trace.go:171","msg":"trace[1782621430] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"843.026193ms","start":"2024-07-17T17:40:21.458521Z","end":"2024-07-17T17:40:22.301548Z","steps":["trace[1782621430] 'agreement among raft nodes before linearized reading'  (duration: 834.088765ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:40:22.301595Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:40:21.458514Z","time spent":"843.07347ms","remote":"127.0.0.1:46628","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 "}
	2024/07/17 17:40:22 WARNING: [core] [Server #9] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T17:40:22.434956Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T17:40:22.435008Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T17:40:22.436477Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3276445ff8d31e34","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-17T17:40:22.436767Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.436811Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.436855Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.436959Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.437014Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.437063Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.437091Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.437114Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.43714Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.437179Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.437259Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.43732Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.437368Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.437395Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.440008Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-17T17:40:22.440113Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-17T17:40:22.440135Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-174628","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> kernel <==
	 17:44:31 up 15 min,  0 users,  load average: 0.95, 0.60, 0.34
	Linux ha-174628 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0] <==
	I0717 17:40:01.261382       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:40:01.261401       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:40:01.261568       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:40:01.261590       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:40:01.261729       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:40:01.261780       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	E0717 17:40:10.369232       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1947&timeout=6m0s&timeoutSeconds=360&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	I0717 17:40:11.258112       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:40:11.259039       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:40:11.259293       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:40:11.259321       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:40:11.259391       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:40:11.259410       1 main.go:303] handling current node
	I0717 17:40:11.259435       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:40:11.259452       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	W0717 17:40:20.470774       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	E0717 17:40:20.471203       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	I0717 17:40:21.258757       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:40:21.258847       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:40:21.259087       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:40:21.259143       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:40:21.259219       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:40:21.259240       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:40:21.259301       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:40:21.259327       1 main.go:303] handling current node
	
	
	==> kindnet [967342f385c9ab30f017f6226ebc0dd6e6f535d7abf22a5884c63765726387b1] <==
	I0717 17:43:52.087869       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:44:02.080874       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:44:02.080954       1 main.go:303] handling current node
	I0717 17:44:02.080983       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:44:02.080991       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:44:02.081145       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:44:02.081165       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:44:02.081264       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:44:02.081284       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:44:12.085482       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:44:12.085568       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:44:12.085781       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:44:12.085800       1 main.go:303] handling current node
	I0717 17:44:12.085812       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:44:12.085817       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:44:12.085886       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:44:12.085901       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:44:22.089141       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:44:22.089263       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:44:22.089437       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:44:22.089474       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:44:22.089554       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:44:22.089574       1 main.go:303] handling current node
	I0717 17:44:22.089604       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:44:22.089624       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [5aeb0a6d29b3db9397dbbe275b13b7c97ca27bb9e4805af79da925fbad61b1af] <==
	I0717 17:42:01.603289       1 options.go:221] external host was not specified, using 192.168.39.100
	I0717 17:42:01.607520       1 server.go:148] Version: v1.30.2
	I0717 17:42:01.607623       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:42:02.239127       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 17:42:02.248280       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 17:42:02.248313       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 17:42:02.248477       1 instance.go:299] Using reconciler: lease
	I0717 17:42:02.249029       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0717 17:42:22.237174       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 17:42:22.238197       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0717 17:42:22.249841       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0717 17:42:22.249844       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [731dfd6c523fb7b9024ae6d32cdb21435f506403f647995fd463c05da6ca3883] <==
	I0717 17:42:43.475327       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0717 17:42:43.475377       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0717 17:42:43.557796       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 17:42:43.559104       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 17:42:43.559133       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 17:42:43.559269       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 17:42:43.559545       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 17:42:43.561986       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 17:42:43.568719       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0717 17:42:43.574605       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.187 192.168.39.97]
	I0717 17:42:43.575950       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 17:42:43.576129       1 aggregator.go:165] initial CRD sync complete...
	I0717 17:42:43.576180       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 17:42:43.576206       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 17:42:43.576231       1 cache.go:39] Caches are synced for autoregister controller
	I0717 17:42:43.582005       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 17:42:43.591938       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:42:43.591958       1 policy_source.go:224] refreshing policies
	I0717 17:42:43.657968       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 17:42:43.676180       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 17:42:43.686073       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 17:42:43.689417       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 17:42:44.468749       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 17:42:45.013622       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.187 192.168.39.97]
	W0717 17:42:55.010584       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.97]
	
	
	==> kube-controller-manager [52d1cab66f48cbd8674b8f411ab80487389fb12b4710edc84607d1ef666b676a] <==
	I0717 17:42:02.795212       1 serving.go:380] Generated self-signed cert in-memory
	I0717 17:42:03.017741       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 17:42:03.017778       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:42:03.019260       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:42:03.019429       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 17:42:03.019506       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 17:42:03.019790       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0717 17:42:23.257017       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.100:8443/healthz\": dial tcp 192.168.39.100:8443: connect: connection refused"
	
	
	==> kube-controller-manager [c844fa26b05ab402b5550aaf261619fd0941934823e012a82a8ef73c185a6f5a] <==
	I0717 17:42:56.733014       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0717 17:42:56.812184       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 17:42:56.829545       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 17:42:56.830701       1 shared_informer.go:320] Caches are synced for disruption
	I0717 17:42:56.913880       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 17:42:56.932689       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 17:42:57.336795       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 17:42:57.379127       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 17:42:57.379169       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 17:43:01.802031       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-vvw4j EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-vvw4j\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 17:43:01.802526       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d202c6c6-18c9-45ac-aae5-303feb0dd1c3", APIVersion:"v1", ResourceVersion:"249", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-vvw4j EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-vvw4j": the object has been modified; please apply your changes to the latest version and try again
	I0717 17:43:01.817168       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-vvw4j EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-vvw4j\": the object has been modified; please apply your changes to the latest version and try again"
	I0717 17:43:01.817741       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"d202c6c6-18c9-45ac-aae5-303feb0dd1c3", APIVersion:"v1", ResourceVersion:"249", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-vvw4j EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-vvw4j": the object has been modified; please apply your changes to the latest version and try again
	I0717 17:43:01.850532       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="90.197566ms"
	I0717 17:43:01.888996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.059965ms"
	I0717 17:43:01.889239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="153.273µs"
	I0717 17:43:08.773577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.68µs"
	I0717 17:43:14.071427       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.00281ms"
	I0717 17:43:14.071553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.516µs"
	I0717 17:43:30.257895       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.086635ms"
	I0717 17:43:30.258044       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.335µs"
	I0717 17:43:33.060490       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.536µs"
	I0717 17:43:52.261577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.597367ms"
	I0717 17:43:52.262124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.252µs"
	I0717 17:44:23.087045       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174628-m04"
	
	
	==> kube-proxy [d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78] <==
	E0717 17:39:12.066154       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:15.137076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:15.137129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:15.137197       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:15.137211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:15.137275       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:15.137382       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:21.282476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:21.282708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:21.282494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:21.282753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:21.282889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:21.282992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:30.498010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:30.498284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:33.570053       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:33.570458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:33.570512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:33.570472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:58.146189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:58.146389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:58.146263       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:58.146512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:58.146330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:58.146549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [ef1cce1c506e03a9c4fbe0f8d38792493de36070eb5cdd03a5cedf085c157a6d] <==
	I0717 17:42:02.638020       1 server_linux.go:69] "Using iptables proxy"
	E0717 17:42:04.097177       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174628\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 17:42:07.170005       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174628\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 17:42:10.241936       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174628\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 17:42:16.387081       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174628\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 17:42:25.602197       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174628\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0717 17:42:44.279608       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0717 17:42:44.385416       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:42:44.385493       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:42:44.385514       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:42:44.387822       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:42:44.388044       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:42:44.388070       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:42:44.389616       1 config.go:192] "Starting service config controller"
	I0717 17:42:44.389687       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:42:44.389740       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:42:44.389758       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:42:44.390380       1 config.go:319] "Starting node config controller"
	I0717 17:42:44.390439       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:42:44.490757       1 shared_informer.go:320] Caches are synced for node config
	I0717 17:42:44.490820       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:42:44.490850       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0606a3ecaf44ed6f342c1b254dc982cccb31c0296258c76f4f5f18927216ea47] <==
	W0717 17:42:38.168739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.100:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:38.168807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.100:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:38.635500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.100:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:38.635607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.100:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:38.731565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:38.731719       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:38.975577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.100:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:38.975653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.100:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:39.406984       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:39.407088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:39.611600       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:39.611780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:40.052319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:40.052423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:40.652860       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.100:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:40.652983       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.100:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:41.043736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.100:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:41.043868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.100:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:41.243289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:41.243327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:43.485375       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 17:42:43.485427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 17:42:43.485511       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:42:43.485540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0717 17:43:03.162737       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9] <==
	W0717 17:40:17.790129       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 17:40:17.790291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 17:40:18.294745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 17:40:18.294915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 17:40:18.308036       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 17:40:18.308167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 17:40:18.588842       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 17:40:18.589014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 17:40:18.840315       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 17:40:18.840391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 17:40:18.898586       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 17:40:18.898713       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 17:40:19.086464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:40:19.086560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:40:19.097080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 17:40:19.097166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 17:40:19.125539       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 17:40:19.125571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 17:40:19.271993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:40:19.272113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:40:19.372021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:40:19.372107       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:40:19.578961       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:40:19.579004       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:40:22.259775       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 17:42:35 ha-174628 kubelet[1358]: I0717 17:42:35.196115    1358 scope.go:117] "RemoveContainer" containerID="6d0e4ff14e576853890e93a9a6a937dd70d94bc7822374634f11e460ae6b3749"
	Jul 17 17:42:35 ha-174628 kubelet[1358]: E0717 17:42:35.196370    1358 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8c0601bb-36f6-434d-8e9d-1e326bf682f5)\"" pod="kube-system/storage-provisioner" podUID="8c0601bb-36f6-434d-8e9d-1e326bf682f5"
	Jul 17 17:42:37 ha-174628 kubelet[1358]: E0717 17:42:37.889140    1358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-174628?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 17 17:42:37 ha-174628 kubelet[1358]: W0717 17:42:37.889931    1358 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=1960": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 17 17:42:37 ha-174628 kubelet[1358]: E0717 17:42:37.890135    1358 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&resourceVersion=1960": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 17 17:42:37 ha-174628 kubelet[1358]: I0717 17:42:37.890004    1358 status_manager.go:853] "Failed to get status for pod" podUID="9bca93ed-aca5-4540-990c-d9e6209d12d0" pod="kube-system/kindnet-k6jnp" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kindnet-k6jnp\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 17:42:40 ha-174628 kubelet[1358]: I0717 17:42:40.961148    1358 status_manager.go:853] "Failed to get status for pod" podUID="1739ac64-be05-4438-9a8f-a0d2821a1650" pod="kube-system/coredns-7db6d8ff4d-nb567" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nb567\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 17 17:42:40 ha-174628 kubelet[1358]: E0717 17:42:40.961156    1358 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-174628.17e310c18f12e43e  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-174628,UID:de7a365d4a82da636f5e615f6e397e41,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-174628,},FirstTimestamp:2024-07-17 17:38:26.10077395 +0000 UTC m=+513.038874484,LastTimestamp:2024-07-17 17:38:26.10077395 +0000 UTC m=+513.038874484,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-174628,}"
	Jul 17 17:42:41 ha-174628 kubelet[1358]: I0717 17:42:41.193412    1358 scope.go:117] "RemoveContainer" containerID="5aeb0a6d29b3db9397dbbe275b13b7c97ca27bb9e4805af79da925fbad61b1af"
	Jul 17 17:42:44 ha-174628 kubelet[1358]: I0717 17:42:44.193476    1358 scope.go:117] "RemoveContainer" containerID="52d1cab66f48cbd8674b8f411ab80487389fb12b4710edc84607d1ef666b676a"
	Jul 17 17:42:45 ha-174628 kubelet[1358]: I0717 17:42:45.967997    1358 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-8zv26" podStartSLOduration=539.324743175 podStartE2EDuration="9m1.967970066s" podCreationTimestamp="2024-07-17 17:33:44 +0000 UTC" firstStartedPulling="2024-07-17 17:33:45.346548135 +0000 UTC m=+232.284648679" lastFinishedPulling="2024-07-17 17:33:47.989775026 +0000 UTC m=+234.927875570" observedRunningTime="2024-07-17 17:33:48.135796816 +0000 UTC m=+235.073897365" watchObservedRunningTime="2024-07-17 17:42:45.967970066 +0000 UTC m=+772.906070618"
	Jul 17 17:42:49 ha-174628 kubelet[1358]: I0717 17:42:49.193344    1358 scope.go:117] "RemoveContainer" containerID="6d0e4ff14e576853890e93a9a6a937dd70d94bc7822374634f11e460ae6b3749"
	Jul 17 17:42:53 ha-174628 kubelet[1358]: E0717 17:42:53.208589    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:42:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:42:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:42:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:42:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:42:53 ha-174628 kubelet[1358]: I0717 17:42:53.252352    1358 scope.go:117] "RemoveContainer" containerID="4c2a82d5779c30132aa024c001d6b11525959eaf1e17d978f6a60cf60c14ea2e"
	Jul 17 17:43:45 ha-174628 kubelet[1358]: I0717 17:43:45.193472    1358 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-174628" podUID="b2d62768-e68e-4ce3-ad84-31ddac00688e"
	Jul 17 17:43:45 ha-174628 kubelet[1358]: I0717 17:43:45.213724    1358 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-174628"
	Jul 17 17:43:53 ha-174628 kubelet[1358]: E0717 17:43:53.219557    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:43:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:43:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:43:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:43:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 17:44:30.054896   40661 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19283-14386/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
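The "token too long" error in the stderr block above is the usual signature of Go's bufio.Scanner hitting its default 64 KiB token limit: at least one line in lastStart.txt is longer than the scanner will accept, so the post-mortem cannot replay the last start log. As a hedged illustration only (this is not the logs.go implementation, and the file path is just a stand-in), a scanner with an enlarged buffer reads such a file without tripping that error:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path; stands in for .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is 64 KiB; allow lines up to 10 MiB so a
		// single oversized log line no longer causes "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}

Raising the second argument of Buffer is the usual remedy when individual log lines can grow past the default MaxScanTokenSize.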
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174628 -n ha-174628
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174628 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (372.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 stop -v=7 --alsologtostderr
E0717 17:45:41.791330   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 stop -v=7 --alsologtostderr: exit status 82 (2m0.455480374s)

                                                
                                                
-- stdout --
	* Stopping node "ha-174628-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:44:49.572547   41071 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:44:49.572674   41071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:44:49.572684   41071 out.go:304] Setting ErrFile to fd 2...
	I0717 17:44:49.572689   41071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:44:49.572859   41071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:44:49.573132   41071 out.go:298] Setting JSON to false
	I0717 17:44:49.573224   41071 mustload.go:65] Loading cluster: ha-174628
	I0717 17:44:49.573602   41071 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:44:49.573701   41071 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:44:49.573880   41071 mustload.go:65] Loading cluster: ha-174628
	I0717 17:44:49.574046   41071 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:44:49.574077   41071 stop.go:39] StopHost: ha-174628-m04
	I0717 17:44:49.574485   41071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:44:49.574525   41071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:44:49.589733   41071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37259
	I0717 17:44:49.590129   41071 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:44:49.590714   41071 main.go:141] libmachine: Using API Version  1
	I0717 17:44:49.590736   41071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:44:49.591082   41071 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:44:49.593558   41071 out.go:177] * Stopping node "ha-174628-m04"  ...
	I0717 17:44:49.594775   41071 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 17:44:49.594800   41071 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:44:49.595034   41071 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 17:44:49.595054   41071 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:44:49.597637   41071 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:44:49.598015   41071 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:44:17 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:44:49.598048   41071 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:44:49.598204   41071 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:44:49.598379   41071 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:44:49.598515   41071 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:44:49.598670   41071 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	I0717 17:44:49.683100   41071 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 17:44:49.735035   41071 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 17:44:49.786582   41071 main.go:141] libmachine: Stopping "ha-174628-m04"...
	I0717 17:44:49.786666   41071 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:44:49.788168   41071 main.go:141] libmachine: (ha-174628-m04) Calling .Stop
	I0717 17:44:49.792033   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 0/120
	I0717 17:44:50.793406   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 1/120
	I0717 17:44:51.795346   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 2/120
	I0717 17:44:52.796558   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 3/120
	I0717 17:44:53.797759   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 4/120
	I0717 17:44:54.799784   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 5/120
	I0717 17:44:55.802221   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 6/120
	I0717 17:44:56.803507   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 7/120
	I0717 17:44:57.804891   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 8/120
	I0717 17:44:58.806146   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 9/120
	I0717 17:44:59.808500   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 10/120
	I0717 17:45:00.809835   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 11/120
	I0717 17:45:01.811362   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 12/120
	I0717 17:45:02.812664   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 13/120
	I0717 17:45:03.814073   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 14/120
	I0717 17:45:04.815638   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 15/120
	I0717 17:45:05.817422   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 16/120
	I0717 17:45:06.818578   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 17/120
	I0717 17:45:07.820044   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 18/120
	I0717 17:45:08.821544   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 19/120
	I0717 17:45:09.823648   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 20/120
	I0717 17:45:10.824884   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 21/120
	I0717 17:45:11.826480   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 22/120
	I0717 17:45:12.827848   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 23/120
	I0717 17:45:13.829388   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 24/120
	I0717 17:45:14.831266   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 25/120
	I0717 17:45:15.832639   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 26/120
	I0717 17:45:16.833953   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 27/120
	I0717 17:45:17.835879   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 28/120
	I0717 17:45:18.837172   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 29/120
	I0717 17:45:19.839330   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 30/120
	I0717 17:45:20.840594   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 31/120
	I0717 17:45:21.841944   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 32/120
	I0717 17:45:22.843430   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 33/120
	I0717 17:45:23.845563   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 34/120
	I0717 17:45:24.846923   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 35/120
	I0717 17:45:25.848120   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 36/120
	I0717 17:45:26.849565   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 37/120
	I0717 17:45:27.851295   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 38/120
	I0717 17:45:28.852609   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 39/120
	I0717 17:45:29.854889   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 40/120
	I0717 17:45:30.856481   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 41/120
	I0717 17:45:31.857874   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 42/120
	I0717 17:45:32.859512   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 43/120
	I0717 17:45:33.861432   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 44/120
	I0717 17:45:34.863410   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 45/120
	I0717 17:45:35.865727   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 46/120
	I0717 17:45:36.867179   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 47/120
	I0717 17:45:37.868452   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 48/120
	I0717 17:45:38.869907   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 49/120
	I0717 17:45:39.871768   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 50/120
	I0717 17:45:40.873064   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 51/120
	I0717 17:45:41.874410   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 52/120
	I0717 17:45:42.875769   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 53/120
	I0717 17:45:43.876994   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 54/120
	I0717 17:45:44.878842   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 55/120
	I0717 17:45:45.880413   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 56/120
	I0717 17:45:46.881715   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 57/120
	I0717 17:45:47.883451   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 58/120
	I0717 17:45:48.884900   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 59/120
	I0717 17:45:49.886413   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 60/120
	I0717 17:45:50.887559   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 61/120
	I0717 17:45:51.888773   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 62/120
	I0717 17:45:52.890085   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 63/120
	I0717 17:45:53.891788   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 64/120
	I0717 17:45:54.893822   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 65/120
	I0717 17:45:55.895414   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 66/120
	I0717 17:45:56.896838   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 67/120
	I0717 17:45:57.897983   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 68/120
	I0717 17:45:58.899254   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 69/120
	I0717 17:45:59.901421   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 70/120
	I0717 17:46:00.902600   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 71/120
	I0717 17:46:01.903778   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 72/120
	I0717 17:46:02.904980   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 73/120
	I0717 17:46:03.906351   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 74/120
	I0717 17:46:04.908141   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 75/120
	I0717 17:46:05.909343   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 76/120
	I0717 17:46:06.911371   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 77/120
	I0717 17:46:07.912931   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 78/120
	I0717 17:46:08.914264   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 79/120
	I0717 17:46:09.916129   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 80/120
	I0717 17:46:10.918262   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 81/120
	I0717 17:46:11.919446   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 82/120
	I0717 17:46:12.920780   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 83/120
	I0717 17:46:13.922620   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 84/120
	I0717 17:46:14.924460   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 85/120
	I0717 17:46:15.925978   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 86/120
	I0717 17:46:16.927265   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 87/120
	I0717 17:46:17.928447   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 88/120
	I0717 17:46:18.929760   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 89/120
	I0717 17:46:19.931720   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 90/120
	I0717 17:46:20.933478   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 91/120
	I0717 17:46:21.935462   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 92/120
	I0717 17:46:22.937009   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 93/120
	I0717 17:46:23.938387   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 94/120
	I0717 17:46:24.940294   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 95/120
	I0717 17:46:25.941955   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 96/120
	I0717 17:46:26.943563   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 97/120
	I0717 17:46:27.945298   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 98/120
	I0717 17:46:28.947399   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 99/120
	I0717 17:46:29.949492   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 100/120
	I0717 17:46:30.951349   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 101/120
	I0717 17:46:31.953151   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 102/120
	I0717 17:46:32.955405   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 103/120
	I0717 17:46:33.956913   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 104/120
	I0717 17:46:34.958368   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 105/120
	I0717 17:46:35.959666   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 106/120
	I0717 17:46:36.961028   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 107/120
	I0717 17:46:37.962420   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 108/120
	I0717 17:46:38.963677   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 109/120
	I0717 17:46:39.965654   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 110/120
	I0717 17:46:40.966996   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 111/120
	I0717 17:46:41.968575   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 112/120
	I0717 17:46:42.969925   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 113/120
	I0717 17:46:43.971407   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 114/120
	I0717 17:46:44.973160   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 115/120
	I0717 17:46:45.975406   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 116/120
	I0717 17:46:46.976684   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 117/120
	I0717 17:46:47.978042   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 118/120
	I0717 17:46:48.979518   41071 main.go:141] libmachine: (ha-174628-m04) Waiting for machine to stop 119/120
	I0717 17:46:49.980717   41071 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 17:46:49.980785   41071 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 17:46:49.983062   41071 out.go:177] 
	W0717 17:46:49.984594   41071 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 17:46:49.984612   41071 out.go:239] * 
	* 
	W0717 17:46:49.986883   41071 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 17:46:49.988226   41071 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-174628 stop -v=7 --alsologtostderr": exit status 82
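The stderr above traces the whole failure: the kvm2 driver asks the guest to stop, polls its state once a second ("Waiting for machine to stop N/120"), and after 120 polls with the VM still "Running" it gives up, which minikube surfaces as GUEST_STOP_TIMEOUT (exit status 82). A minimal sketch of that poll-then-give-up pattern follows; the 120-iteration, one-second cadence is inferred from the log lines, not taken from the minikube source:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// vmState is a stand-in for whatever the real driver reports; here it never
	// changes, mirroring the stuck VM seen in the log above.
	func vmState() string { return "Running" }

	func stopAndWait(maxPolls int) error {
		// (the stop request would be issued here)
		for i := 0; i < maxPolls; i++ {
			if vmState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxPolls)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// A small poll count keeps the sketch quick; the log above shows the
		// real run exhausting all 120 attempts before failing.
		if err := stopAndWait(3); err != nil {
			fmt.Println("stop err:", err)
		}
	}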
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr: exit status 3 (18.911355453s)

                                                
                                                
-- stdout --
	ha-174628
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-174628-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:46:50.030661   41490 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:46:50.030894   41490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:46:50.030903   41490 out.go:304] Setting ErrFile to fd 2...
	I0717 17:46:50.030907   41490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:46:50.031064   41490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:46:50.031218   41490 out.go:298] Setting JSON to false
	I0717 17:46:50.031244   41490 mustload.go:65] Loading cluster: ha-174628
	I0717 17:46:50.031293   41490 notify.go:220] Checking for updates...
	I0717 17:46:50.031595   41490 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:46:50.031615   41490 status.go:255] checking status of ha-174628 ...
	I0717 17:46:50.031987   41490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:46:50.032037   41490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:46:50.051871   41490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36759
	I0717 17:46:50.052339   41490 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:46:50.053091   41490 main.go:141] libmachine: Using API Version  1
	I0717 17:46:50.053120   41490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:46:50.053453   41490 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:46:50.053667   41490 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:46:50.055288   41490 status.go:330] ha-174628 host status = "Running" (err=<nil>)
	I0717 17:46:50.055304   41490 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:46:50.055691   41490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:46:50.055732   41490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:46:50.070791   41490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39457
	I0717 17:46:50.071266   41490 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:46:50.071715   41490 main.go:141] libmachine: Using API Version  1
	I0717 17:46:50.071737   41490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:46:50.072041   41490 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:46:50.072214   41490 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:46:50.075094   41490 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:46:50.075471   41490 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:46:50.075502   41490 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:46:50.075683   41490 host.go:66] Checking if "ha-174628" exists ...
	I0717 17:46:50.075950   41490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:46:50.075979   41490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:46:50.090899   41490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40415
	I0717 17:46:50.091343   41490 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:46:50.091783   41490 main.go:141] libmachine: Using API Version  1
	I0717 17:46:50.091803   41490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:46:50.092088   41490 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:46:50.092240   41490 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:46:50.092439   41490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:46:50.092468   41490 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:46:50.094943   41490 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:46:50.095344   41490 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:46:50.095371   41490 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:46:50.095485   41490 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:46:50.095760   41490 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:46:50.095939   41490 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:46:50.096107   41490 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:46:50.190511   41490 ssh_runner.go:195] Run: systemctl --version
	I0717 17:46:50.197190   41490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:46:50.213172   41490 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:46:50.213197   41490 api_server.go:166] Checking apiserver status ...
	I0717 17:46:50.213242   41490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:46:50.232069   41490 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4958/cgroup
	W0717 17:46:50.245953   41490 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4958/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:46:50.246001   41490 ssh_runner.go:195] Run: ls
	I0717 17:46:50.250340   41490 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:46:50.257357   41490 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:46:50.257380   41490 status.go:422] ha-174628 apiserver status = Running (err=<nil>)
	I0717 17:46:50.257389   41490 status.go:257] ha-174628 status: &{Name:ha-174628 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:46:50.257412   41490 status.go:255] checking status of ha-174628-m02 ...
	I0717 17:46:50.257711   41490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:46:50.257743   41490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:46:50.272035   41490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39071
	I0717 17:46:50.272446   41490 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:46:50.272922   41490 main.go:141] libmachine: Using API Version  1
	I0717 17:46:50.272961   41490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:46:50.273249   41490 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:46:50.273422   41490 main.go:141] libmachine: (ha-174628-m02) Calling .GetState
	I0717 17:46:50.274815   41490 status.go:330] ha-174628-m02 host status = "Running" (err=<nil>)
	I0717 17:46:50.274828   41490 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:46:50.275193   41490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:46:50.275242   41490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:46:50.292221   41490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34149
	I0717 17:46:50.292607   41490 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:46:50.293071   41490 main.go:141] libmachine: Using API Version  1
	I0717 17:46:50.293093   41490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:46:50.293400   41490 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:46:50.293588   41490 main.go:141] libmachine: (ha-174628-m02) Calling .GetIP
	I0717 17:46:50.295860   41490 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:46:50.296338   41490 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:42:05 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:46:50.296362   41490 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:46:50.296500   41490 host.go:66] Checking if "ha-174628-m02" exists ...
	I0717 17:46:50.296774   41490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:46:50.296803   41490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:46:50.310924   41490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0717 17:46:50.311387   41490 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:46:50.311847   41490 main.go:141] libmachine: Using API Version  1
	I0717 17:46:50.311874   41490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:46:50.312294   41490 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:46:50.312454   41490 main.go:141] libmachine: (ha-174628-m02) Calling .DriverName
	I0717 17:46:50.312605   41490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:46:50.312631   41490 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHHostname
	I0717 17:46:50.315037   41490 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:46:50.315441   41490 main.go:141] libmachine: (ha-174628-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:10:53", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:42:05 +0000 UTC Type:0 Mac:52:54:00:26:10:53 Iaid: IPaddr:192.168.39.97 Prefix:24 Hostname:ha-174628-m02 Clientid:01:52:54:00:26:10:53}
	I0717 17:46:50.315465   41490 main.go:141] libmachine: (ha-174628-m02) DBG | domain ha-174628-m02 has defined IP address 192.168.39.97 and MAC address 52:54:00:26:10:53 in network mk-ha-174628
	I0717 17:46:50.315612   41490 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHPort
	I0717 17:46:50.315774   41490 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHKeyPath
	I0717 17:46:50.315894   41490 main.go:141] libmachine: (ha-174628-m02) Calling .GetSSHUsername
	I0717 17:46:50.316025   41490 sshutil.go:53] new ssh client: &{IP:192.168.39.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m02/id_rsa Username:docker}
	I0717 17:46:50.402585   41490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 17:46:50.417701   41490 kubeconfig.go:125] found "ha-174628" server: "https://192.168.39.254:8443"
	I0717 17:46:50.417729   41490 api_server.go:166] Checking apiserver status ...
	I0717 17:46:50.417771   41490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 17:46:50.431773   41490 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1373/cgroup
	W0717 17:46:50.441991   41490 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1373/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 17:46:50.442035   41490 ssh_runner.go:195] Run: ls
	I0717 17:46:50.446187   41490 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0717 17:46:50.450337   41490 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0717 17:46:50.450360   41490 status.go:422] ha-174628-m02 apiserver status = Running (err=<nil>)
	I0717 17:46:50.450368   41490 status.go:257] ha-174628-m02 status: &{Name:ha-174628-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 17:46:50.450381   41490 status.go:255] checking status of ha-174628-m04 ...
	I0717 17:46:50.450657   41490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:46:50.450705   41490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:46:50.465079   41490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37105
	I0717 17:46:50.465507   41490 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:46:50.465964   41490 main.go:141] libmachine: Using API Version  1
	I0717 17:46:50.465982   41490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:46:50.466229   41490 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:46:50.466388   41490 main.go:141] libmachine: (ha-174628-m04) Calling .GetState
	I0717 17:46:50.467853   41490 status.go:330] ha-174628-m04 host status = "Running" (err=<nil>)
	I0717 17:46:50.467880   41490 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:46:50.468183   41490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:46:50.468218   41490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:46:50.482925   41490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0717 17:46:50.483320   41490 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:46:50.483802   41490 main.go:141] libmachine: Using API Version  1
	I0717 17:46:50.483823   41490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:46:50.484151   41490 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:46:50.484356   41490 main.go:141] libmachine: (ha-174628-m04) Calling .GetIP
	I0717 17:46:50.487056   41490 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:46:50.487395   41490 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:44:17 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:46:50.487421   41490 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:46:50.487564   41490 host.go:66] Checking if "ha-174628-m04" exists ...
	I0717 17:46:50.487871   41490 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:46:50.487907   41490 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:46:50.502967   41490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38445
	I0717 17:46:50.503380   41490 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:46:50.503789   41490 main.go:141] libmachine: Using API Version  1
	I0717 17:46:50.503809   41490 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:46:50.504111   41490 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:46:50.504297   41490 main.go:141] libmachine: (ha-174628-m04) Calling .DriverName
	I0717 17:46:50.504485   41490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 17:46:50.504506   41490 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHHostname
	I0717 17:46:50.506999   41490 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:46:50.507448   41490 main.go:141] libmachine: (ha-174628-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:be:c6", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:44:17 +0000 UTC Type:0 Mac:52:54:00:81:be:c6 Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:ha-174628-m04 Clientid:01:52:54:00:81:be:c6}
	I0717 17:46:50.507484   41490 main.go:141] libmachine: (ha-174628-m04) DBG | domain ha-174628-m04 has defined IP address 192.168.39.161 and MAC address 52:54:00:81:be:c6 in network mk-ha-174628
	I0717 17:46:50.507633   41490 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHPort
	I0717 17:46:50.507793   41490 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHKeyPath
	I0717 17:46:50.507954   41490 main.go:141] libmachine: (ha-174628-m04) Calling .GetSSHUsername
	I0717 17:46:50.508092   41490 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628-m04/id_rsa Username:docker}
	W0717 17:47:08.901135   41490 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.161:22: connect: no route to host
	W0717 17:47:08.901226   41490 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	E0717 17:47:08.901239   41490 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host
	I0717 17:47:08.901246   41490 status.go:257] ha-174628-m04 status: &{Name:ha-174628-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0717 17:47:08.901262   41490 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.161:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr" : exit status 3
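The status output above also shows how an unreachable node ends up classified: the SSH dial to 192.168.39.161:22 fails with "no route to host", so the worker is reported as host: Error / kubelet: Nonexistent even though the earlier stop never completed. A rough sketch of that kind of reachability probe, with the port and timeout chosen for illustration rather than taken from minikube:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// hostStatus classifies a node by whether its SSH port answers within the timeout.
	func hostStatus(addr string) string {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			return fmt.Sprintf("host: Error (%v)", err)
		}
		conn.Close()
		return "host: Running"
	}

	func main() {
		// 192.168.39.161 is the worker the log above could not reach.
		fmt.Println(hostStatus("192.168.39.161:22"))
	}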
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-174628 -n ha-174628
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-174628 logs -n 25: (1.542387165s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-174628 ssh -n ha-174628-m02 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m03_ha-174628-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04:/home/docker/cp-test_ha-174628-m03_ha-174628-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m04 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m03_ha-174628-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp testdata/cp-test.txt                                                | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3227756898/001/cp-test_ha-174628-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628:/home/docker/cp-test_ha-174628-m04_ha-174628.txt                       |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628 sudo cat                                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628.txt                                 |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m02:/home/docker/cp-test_ha-174628-m04_ha-174628-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m02 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m03:/home/docker/cp-test_ha-174628-m04_ha-174628-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n                                                                 | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | ha-174628-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-174628 ssh -n ha-174628-m03 sudo cat                                          | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC | 17 Jul 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-174628-m04_ha-174628-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-174628 node stop m02 -v=7                                                     | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-174628 node start m02 -v=7                                                    | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-174628 -v=7                                                           | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-174628 -v=7                                                                | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-174628 --wait=true -v=7                                                    | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:40 UTC | 17 Jul 24 17:44 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-174628                                                                | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:44 UTC |                     |
	| node    | ha-174628 node delete m03 -v=7                                                   | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:44 UTC | 17 Jul 24 17:44 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-174628 stop -v=7                                                              | ha-174628 | jenkins | v1.33.1 | 17 Jul 24 17:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 17:40:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 17:40:21.429340   39305 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:40:21.429465   39305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:40:21.429474   39305 out.go:304] Setting ErrFile to fd 2...
	I0717 17:40:21.429479   39305 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:40:21.429657   39305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:40:21.430186   39305 out.go:298] Setting JSON to false
	I0717 17:40:21.431115   39305 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4964,"bootTime":1721233057,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 17:40:21.431167   39305 start.go:139] virtualization: kvm guest
	I0717 17:40:21.433582   39305 out.go:177] * [ha-174628] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 17:40:21.434961   39305 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 17:40:21.435001   39305 notify.go:220] Checking for updates...
	I0717 17:40:21.437384   39305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 17:40:21.438638   39305 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:40:21.440054   39305 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:40:21.441347   39305 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 17:40:21.442519   39305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 17:40:21.444142   39305 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:40:21.444217   39305 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 17:40:21.444672   39305 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:40:21.444731   39305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:40:21.459431   39305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0717 17:40:21.459766   39305 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:40:21.460267   39305 main.go:141] libmachine: Using API Version  1
	I0717 17:40:21.460288   39305 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:40:21.460689   39305 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:40:21.460858   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:40:21.497053   39305 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 17:40:21.498314   39305 start.go:297] selected driver: kvm2
	I0717 17:40:21.498337   39305 start.go:901] validating driver "kvm2" against &{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:40:21.498517   39305 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 17:40:21.498973   39305 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:40:21.499075   39305 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 17:40:21.513489   39305 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 17:40:21.514152   39305 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 17:40:21.514219   39305 cni.go:84] Creating CNI manager for ""
	I0717 17:40:21.514232   39305 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 17:40:21.514286   39305 start.go:340] cluster config:
	{Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:40:21.514407   39305 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:40:21.516121   39305 out.go:177] * Starting "ha-174628" primary control-plane node in "ha-174628" cluster
	I0717 17:40:21.517394   39305 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:40:21.517428   39305 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 17:40:21.517437   39305 cache.go:56] Caching tarball of preloaded images
	I0717 17:40:21.517503   39305 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 17:40:21.517513   39305 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 17:40:21.517633   39305 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/config.json ...
	I0717 17:40:21.517835   39305 start.go:360] acquireMachinesLock for ha-174628: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 17:40:21.517902   39305 start.go:364] duration metric: took 42.256µs to acquireMachinesLock for "ha-174628"
	I0717 17:40:21.517922   39305 start.go:96] Skipping create...Using existing machine configuration
	I0717 17:40:21.517928   39305 fix.go:54] fixHost starting: 
	I0717 17:40:21.518260   39305 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:40:21.518297   39305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:40:21.531581   39305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39671
	I0717 17:40:21.532034   39305 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:40:21.532567   39305 main.go:141] libmachine: Using API Version  1
	I0717 17:40:21.532593   39305 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:40:21.532881   39305 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:40:21.533066   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:40:21.533236   39305 main.go:141] libmachine: (ha-174628) Calling .GetState
	I0717 17:40:21.534638   39305 fix.go:112] recreateIfNeeded on ha-174628: state=Running err=<nil>
	W0717 17:40:21.534668   39305 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 17:40:21.536385   39305 out.go:177] * Updating the running kvm2 "ha-174628" VM ...
	I0717 17:40:21.537760   39305 machine.go:94] provisionDockerMachine start ...
	I0717 17:40:21.537781   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:40:21.537956   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:21.540080   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.540565   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:21.540598   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.540766   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:40:21.540965   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.541108   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.541228   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:40:21.541400   39305 main.go:141] libmachine: Using SSH client type: native
	I0717 17:40:21.541583   39305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:40:21.541594   39305 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 17:40:21.641830   39305 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174628
	
	I0717 17:40:21.641858   39305 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:40:21.642092   39305 buildroot.go:166] provisioning hostname "ha-174628"
	I0717 17:40:21.642113   39305 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:40:21.642277   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:21.644725   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.645135   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:21.645160   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.645310   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:40:21.645495   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.645651   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.645820   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:40:21.645961   39305 main.go:141] libmachine: Using SSH client type: native
	I0717 17:40:21.646118   39305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:40:21.646129   39305 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-174628 && echo "ha-174628" | sudo tee /etc/hostname
	I0717 17:40:21.759554   39305 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-174628
	
	I0717 17:40:21.759599   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:21.762189   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.762597   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:21.762634   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.762779   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:40:21.762965   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.763131   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:21.763238   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:40:21.763408   39305 main.go:141] libmachine: Using SSH client type: native
	I0717 17:40:21.763614   39305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:40:21.763636   39305 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-174628' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-174628/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-174628' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 17:40:21.865553   39305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 17:40:21.865575   39305 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 17:40:21.865597   39305 buildroot.go:174] setting up certificates
	I0717 17:40:21.865606   39305 provision.go:84] configureAuth start
	I0717 17:40:21.865615   39305 main.go:141] libmachine: (ha-174628) Calling .GetMachineName
	I0717 17:40:21.865866   39305 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:40:21.868270   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.868676   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:21.868703   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.868853   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:21.870893   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.871213   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:21.871237   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:21.871345   39305 provision.go:143] copyHostCerts
	I0717 17:40:21.871390   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:40:21.871424   39305 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 17:40:21.871435   39305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 17:40:21.871501   39305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 17:40:21.871617   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:40:21.871644   39305 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 17:40:21.871651   39305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 17:40:21.871677   39305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 17:40:21.871720   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:40:21.871735   39305 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 17:40:21.871741   39305 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 17:40:21.871763   39305 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 17:40:21.871826   39305 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.ha-174628 san=[127.0.0.1 192.168.39.100 ha-174628 localhost minikube]
	I0717 17:40:22.013479   39305 provision.go:177] copyRemoteCerts
	I0717 17:40:22.013558   39305 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 17:40:22.013592   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:22.016141   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:22.016519   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:22.016553   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:22.016784   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:40:22.016989   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:22.017131   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:40:22.017278   39305 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:40:22.095020   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 17:40:22.095089   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0717 17:40:22.123795   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 17:40:22.123899   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 17:40:22.147207   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 17:40:22.147292   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 17:40:22.171062   39305 provision.go:87] duration metric: took 305.44263ms to configureAuth
	I0717 17:40:22.171093   39305 buildroot.go:189] setting minikube options for container-runtime
	I0717 17:40:22.171319   39305 config.go:182] Loaded profile config "ha-174628": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:40:22.171389   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:40:22.173692   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:22.174035   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:40:22.174065   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:40:22.174186   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:40:22.174413   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:22.174597   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:40:22.174734   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:40:22.174907   39305 main.go:141] libmachine: Using SSH client type: native
	I0717 17:40:22.175058   39305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:40:22.175072   39305 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 17:41:52.904070   39305 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 17:41:52.904094   39305 machine.go:97] duration metric: took 1m31.366318438s to provisionDockerMachine
	I0717 17:41:52.904107   39305 start.go:293] postStartSetup for "ha-174628" (driver="kvm2")
	I0717 17:41:52.904132   39305 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 17:41:52.904150   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:52.904476   39305 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 17:41:52.904505   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:41:52.907417   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:52.907881   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:52.907905   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:52.908066   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:41:52.908249   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:52.908411   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:41:52.908536   39305 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:41:52.987942   39305 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 17:41:52.991746   39305 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 17:41:52.991764   39305 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 17:41:52.991823   39305 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 17:41:52.991911   39305 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 17:41:52.991922   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /etc/ssl/certs/215772.pem
	I0717 17:41:52.992019   39305 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 17:41:53.000606   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:41:53.022706   39305 start.go:296] duration metric: took 118.585939ms for postStartSetup
	I0717 17:41:53.022751   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:53.023101   39305 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0717 17:41:53.023153   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:41:53.025805   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.026253   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:53.026273   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.026498   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:41:53.026709   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:53.026901   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:41:53.027095   39305 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	W0717 17:41:53.106580   39305 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0717 17:41:53.106603   39305 fix.go:56] duration metric: took 1m31.588674359s for fixHost
	I0717 17:41:53.106638   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:41:53.109514   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.109929   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:53.109954   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.110097   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:41:53.110293   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:53.110511   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:53.110702   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:41:53.110922   39305 main.go:141] libmachine: Using SSH client type: native
	I0717 17:41:53.111128   39305 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0717 17:41:53.111143   39305 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 17:41:53.209302   39305 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721238113.163965415
	
	I0717 17:41:53.209329   39305 fix.go:216] guest clock: 1721238113.163965415
	I0717 17:41:53.209335   39305 fix.go:229] Guest: 2024-07-17 17:41:53.163965415 +0000 UTC Remote: 2024-07-17 17:41:53.106614193 +0000 UTC m=+91.711299656 (delta=57.351222ms)
	I0717 17:41:53.209354   39305 fix.go:200] guest clock delta is within tolerance: 57.351222ms
	I0717 17:41:53.209360   39305 start.go:83] releasing machines lock for "ha-174628", held for 1m31.691444595s
	I0717 17:41:53.209383   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:53.209625   39305 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:41:53.212614   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.213001   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:53.213030   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.213134   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:53.213625   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:53.213783   39305 main.go:141] libmachine: (ha-174628) Calling .DriverName
	I0717 17:41:53.213874   39305 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 17:41:53.213909   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:41:53.214001   39305 ssh_runner.go:195] Run: cat /version.json
	I0717 17:41:53.214030   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHHostname
	I0717 17:41:53.216630   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.216964   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.217008   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:53.217048   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.217167   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:41:53.217365   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:53.217517   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:53.217544   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:41:53.217558   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:53.217699   39305 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:41:53.217786   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHPort
	I0717 17:41:53.217948   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHKeyPath
	I0717 17:41:53.218092   39305 main.go:141] libmachine: (ha-174628) Calling .GetSSHUsername
	I0717 17:41:53.218249   39305 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/ha-174628/id_rsa Username:docker}
	I0717 17:41:53.325810   39305 ssh_runner.go:195] Run: systemctl --version
	I0717 17:41:53.331869   39305 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 17:41:53.494531   39305 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 17:41:53.500010   39305 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 17:41:53.500065   39305 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 17:41:53.508712   39305 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 17:41:53.508731   39305 start.go:495] detecting cgroup driver to use...
	I0717 17:41:53.508787   39305 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 17:41:53.525080   39305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 17:41:53.537720   39305 docker.go:217] disabling cri-docker service (if available) ...
	I0717 17:41:53.537772   39305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 17:41:53.550848   39305 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 17:41:53.563389   39305 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 17:41:53.701502   39305 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 17:41:53.844205   39305 docker.go:233] disabling docker service ...
	I0717 17:41:53.844277   39305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 17:41:53.860500   39305 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 17:41:53.873397   39305 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 17:41:54.018369   39305 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 17:41:54.183207   39305 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 17:41:54.196687   39305 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 17:41:54.214724   39305 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 17:41:54.214799   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.225224   39305 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 17:41:54.225294   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.236487   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.246685   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.256623   39305 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 17:41:54.266198   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.275446   39305 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.285306   39305 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 17:41:54.294644   39305 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 17:41:54.303183   39305 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 17:41:54.311581   39305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:41:54.445238   39305 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 17:41:54.700492   39305 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 17:41:54.700562   39305 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 17:41:54.705170   39305 start.go:563] Will wait 60s for crictl version
	I0717 17:41:54.705213   39305 ssh_runner.go:195] Run: which crictl
	I0717 17:41:54.708468   39305 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 17:41:54.750802   39305 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 17:41:54.750910   39305 ssh_runner.go:195] Run: crio --version
	I0717 17:41:54.782994   39305 ssh_runner.go:195] Run: crio --version
	I0717 17:41:54.811002   39305 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 17:41:54.812104   39305 main.go:141] libmachine: (ha-174628) Calling .GetIP
	I0717 17:41:54.814403   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:54.814785   39305 main.go:141] libmachine: (ha-174628) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:44:49", ip: ""} in network mk-ha-174628: {Iface:virbr1 ExpiryTime:2024-07-17 18:29:29 +0000 UTC Type:0 Mac:52:54:00:2f:44:49 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:ha-174628 Clientid:01:52:54:00:2f:44:49}
	I0717 17:41:54.814819   39305 main.go:141] libmachine: (ha-174628) DBG | domain ha-174628 has defined IP address 192.168.39.100 and MAC address 52:54:00:2f:44:49 in network mk-ha-174628
	I0717 17:41:54.815025   39305 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 17:41:54.819415   39305 kubeadm.go:883] updating cluster {Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 17:41:54.819550   39305 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:41:54.819601   39305 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:41:54.861708   39305 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 17:41:54.861730   39305 crio.go:433] Images already preloaded, skipping extraction
	I0717 17:41:54.861787   39305 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 17:41:54.895148   39305 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 17:41:54.895190   39305 cache_images.go:84] Images are preloaded, skipping loading
	I0717 17:41:54.895202   39305 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.30.2 crio true true} ...
	I0717 17:41:54.895464   39305 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-174628 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 17:41:54.895809   39305 ssh_runner.go:195] Run: crio config
	I0717 17:41:54.949147   39305 cni.go:84] Creating CNI manager for ""
	I0717 17:41:54.949164   39305 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0717 17:41:54.949176   39305 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 17:41:54.949201   39305 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-174628 NodeName:ha-174628 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 17:41:54.949356   39305 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-174628"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 17:41:54.949383   39305 kube-vip.go:115] generating kube-vip config ...
	I0717 17:41:54.949424   39305 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0717 17:41:54.960586   39305 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0717 17:41:54.960680   39305 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0717 17:41:54.960739   39305 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 17:41:54.969577   39305 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 17:41:54.969629   39305 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0717 17:41:54.978318   39305 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0717 17:41:54.993669   39305 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 17:41:55.008958   39305 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 17:41:55.023889   39305 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0717 17:41:55.039005   39305 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0717 17:41:55.043213   39305 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 17:41:55.183937   39305 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 17:41:55.198271   39305 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628 for IP: 192.168.39.100
	I0717 17:41:55.198299   39305 certs.go:194] generating shared ca certs ...
	I0717 17:41:55.198327   39305 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:41:55.198478   39305 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 17:41:55.198522   39305 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 17:41:55.198532   39305 certs.go:256] generating profile certs ...
	I0717 17:41:55.198607   39305 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/client.key
	I0717 17:41:55.198633   39305 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.df14862d
	I0717 17:41:55.198647   39305 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.df14862d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.100 192.168.39.97 192.168.39.187 192.168.39.254]
	I0717 17:41:55.296660   39305 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.df14862d ...
	I0717 17:41:55.296688   39305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.df14862d: {Name:mkec4f7fab86bbcc849b125ea863b5b4331e7f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:41:55.296845   39305 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.df14862d ...
	I0717 17:41:55.296856   39305 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.df14862d: {Name:mkea4da757864a30889a26df0dd583fc93fc2fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 17:41:55.296922   39305 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt.df14862d -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt
	I0717 17:41:55.297081   39305 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key.df14862d -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key
	I0717 17:41:55.297201   39305 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key
	I0717 17:41:55.297215   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 17:41:55.297226   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 17:41:55.297237   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 17:41:55.297249   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 17:41:55.297259   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 17:41:55.297271   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 17:41:55.297296   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 17:41:55.297322   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 17:41:55.297372   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 17:41:55.297400   39305 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 17:41:55.297409   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 17:41:55.297429   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 17:41:55.297449   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 17:41:55.297471   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 17:41:55.297505   39305 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 17:41:55.297531   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /usr/share/ca-certificates/215772.pem
	I0717 17:41:55.297547   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:41:55.297558   39305 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem -> /usr/share/ca-certificates/21577.pem
	I0717 17:41:55.298114   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 17:41:55.322608   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 17:41:55.344511   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 17:41:55.366063   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 17:41:55.387020   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 17:41:55.409022   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 17:41:55.430672   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 17:41:55.452898   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/ha-174628/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 17:41:55.474640   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 17:41:55.495749   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 17:41:55.516526   39305 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 17:41:55.537201   39305 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 17:41:55.552230   39305 ssh_runner.go:195] Run: openssl version
	I0717 17:41:55.558056   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 17:41:55.567987   39305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 17:41:55.572108   39305 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 17:41:55.572166   39305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 17:41:55.577361   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 17:41:55.585885   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 17:41:55.595609   39305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 17:41:55.599479   39305 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 17:41:55.599523   39305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 17:41:55.604587   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 17:41:55.612961   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 17:41:55.622461   39305 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:41:55.626496   39305 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:41:55.626541   39305 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 17:41:55.631484   39305 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 17:41:55.639755   39305 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 17:41:55.643806   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 17:41:55.648975   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 17:41:55.654328   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 17:41:55.659522   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 17:41:55.664323   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 17:41:55.669298   39305 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 17:41:55.674284   39305 kubeadm.go:392] StartCluster: {Name:ha-174628 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Clust
erName:ha-174628 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.97 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.161 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:41:55.674413   39305 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 17:41:55.674452   39305 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 17:41:55.709330   39305 cri.go:89] found id: "bac1830ddc0ce3bb283dc0ff8ea48a22f58663f35dc0d244d9f38455c1a0d26d"
	I0717 17:41:55.709354   39305 cri.go:89] found id: "f57484a0f36cb0e3be2259b95fa649943aa4e1a3dc1cf2e88fbd1e4aae633a65"
	I0717 17:41:55.709360   39305 cri.go:89] found id: "4c2a82d5779c30132aa024c001d6b11525959eaf1e17d978f6a60cf60c14ea2e"
	I0717 17:41:55.709365   39305 cri.go:89] found id: "e8e4922ea1eac7b61df3c5c3284c361f60b0cbb9299b480529b43872a061b780"
	I0717 17:41:55.709369   39305 cri.go:89] found id: "976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9"
	I0717 17:41:55.709373   39305 cri.go:89] found id: "97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb"
	I0717 17:41:55.709379   39305 cri.go:89] found id: "2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0"
	I0717 17:41:55.709383   39305 cri.go:89] found id: "d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78"
	I0717 17:41:55.709387   39305 cri.go:89] found id: "370441d5e9e25be3ceff0e96f53875a159099004aa797d2570be4e3e61aa9e59"
	I0717 17:41:55.709393   39305 cri.go:89] found id: "e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147"
	I0717 17:41:55.709409   39305 cri.go:89] found id: "889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9"
	I0717 17:41:55.709416   39305 cri.go:89] found id: "9880796029aa2ee7897660b3ccd40a039526e26c4b0208d087876a8ed4a6e3dd"
	I0717 17:41:55.709421   39305 cri.go:89] found id: "dbb0842f9354fc3963cae2902decd174b028cb857227fdf23844b2da6a7c01ac"
	I0717 17:41:55.709428   39305 cri.go:89] found id: ""
	I0717 17:41:55.709477   39305 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.460171462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721238429460145866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f75a6b5a-b9b0-4753-9d17-3ff7a4695235 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.460913922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c57339b2-17ad-4e0a-8e66-ba73b43c75e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.460985148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c57339b2-17ad-4e0a-8e66-ba73b43c75e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.461442782Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6c478540f2d235d413baa4d4b1eb115afe319327ba78736a44196ea41de7ad9,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721238169205488492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c844fa26b05ab402b5550aaf261619fd0941934823e012a82a8ef73c185a6f5a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721238164205844831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731dfd6c523fb7b9024ae6d32cdb21435f506403f647995fd463c05da6ca3883,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721238161221363556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa1c2154a9a4fe39461bed1d21fb6a362b654583e5f79e01e7f0c3c1391993,PodSandboxId:94b647eeb1369e4493dc85bcf955b2befed32d2521170c361a8a9b5399948e6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721238154435466676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1536d680fdb6c0b0a97dfde782f97e1b635bb7ba734b54c90f5a637e6121b403,PodSandboxId:c88634d156297e8cffe86a0c16661b1b84d415c5d7d9c9dd1752434ff17dc477,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721238133419777863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6ba04a85bfbff5a957b7732c295eff,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1cce1c506e03a9c4fbe0f8d38792493de36070eb5cdd03a5cedf085c157a6d,PodSandboxId:3662e32676f84a5c133443791f0a3a0f8f72902b220c453ec8112f3d3cd1d292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721238121368195649,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:52d1cab66f48cbd8674b8f411ab80487389fb12b4710edc84607d1ef666b676a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721238121194388588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:cb8f9753c4a91113f4d19fb976afdc57ba90f879488ae102acb94522b4753834,PodSandboxId:ce3f0eae7af4b8c50f907a2439475bdaebec3b8543d2009b15a632a25fbfd3c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121266226656,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f83ba4f05083b01b85083344f1ceece3524c9ed469106ad62b56da508a5126d,PodSandboxId:a8eeb7452b65fe4a38b12f3f38590d258f7d09b65da4fb816c76b123352d2531,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121201747500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a32601104ebf78674d54d79144e835165efe3710687b6a61a6e11009905acd,PodSandboxId:11ad8e8d3095fed789474425b90c741241c2a47734b7d6d0ba5816f7742455e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721238121252764480,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aeb0a6d29b3db9397dbbe275b13b7c97ca27bb9e4805af79da925fbad61b1af,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721238121120781163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967342f385c9ab30f017f6226ebc0dd6e6f535d7abf22a5884c63765726387b1,PodSandboxId:eaf3b42369089d6ae0fa4f237fb70670b8c932dbb0828d6a377455465781939c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721238121047011528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-a
ca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0e4ff14e576853890e93a9a6a937dd70d94bc7822374634f11e460ae6b3749,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721238120916030757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d
-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0606a3ecaf44ed6f342c1b254dc982cccb31c0296258c76f4f5f18927216ea47,PodSandboxId:402fa28dbe97771bbb2f28fb97e12f7a31e3495a304ab85f4caefa22e16d24e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721238120999347906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annot
ations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721237628009986530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421982456232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kuberne
tes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421928772505,Labels:map[string]string{io.kubernetes.container.name: coredn
s,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721237410216101906,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721237406540064620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721237387075455569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721237387046822608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c57339b2-17ad-4e0a-8e66-ba73b43c75e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.501259954Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5dfaf82d-a424-40e8-b545-514f379fe70c name=/runtime.v1.RuntimeService/Version
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.501349010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5dfaf82d-a424-40e8-b545-514f379fe70c name=/runtime.v1.RuntimeService/Version
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.502329273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e369ae1a-bf50-498f-9846-68a745886f37 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.502979371Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721238429502951492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e369ae1a-bf50-498f-9846-68a745886f37 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.503654230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=945af23f-1a5e-4507-8454-80c8b23b61d2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.503773593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=945af23f-1a5e-4507-8454-80c8b23b61d2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.504182746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6c478540f2d235d413baa4d4b1eb115afe319327ba78736a44196ea41de7ad9,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721238169205488492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c844fa26b05ab402b5550aaf261619fd0941934823e012a82a8ef73c185a6f5a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721238164205844831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731dfd6c523fb7b9024ae6d32cdb21435f506403f647995fd463c05da6ca3883,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721238161221363556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa1c2154a9a4fe39461bed1d21fb6a362b654583e5f79e01e7f0c3c1391993,PodSandboxId:94b647eeb1369e4493dc85bcf955b2befed32d2521170c361a8a9b5399948e6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721238154435466676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1536d680fdb6c0b0a97dfde782f97e1b635bb7ba734b54c90f5a637e6121b403,PodSandboxId:c88634d156297e8cffe86a0c16661b1b84d415c5d7d9c9dd1752434ff17dc477,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721238133419777863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6ba04a85bfbff5a957b7732c295eff,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1cce1c506e03a9c4fbe0f8d38792493de36070eb5cdd03a5cedf085c157a6d,PodSandboxId:3662e32676f84a5c133443791f0a3a0f8f72902b220c453ec8112f3d3cd1d292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721238121368195649,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:52d1cab66f48cbd8674b8f411ab80487389fb12b4710edc84607d1ef666b676a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721238121194388588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:cb8f9753c4a91113f4d19fb976afdc57ba90f879488ae102acb94522b4753834,PodSandboxId:ce3f0eae7af4b8c50f907a2439475bdaebec3b8543d2009b15a632a25fbfd3c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121266226656,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f83ba4f05083b01b85083344f1ceece3524c9ed469106ad62b56da508a5126d,PodSandboxId:a8eeb7452b65fe4a38b12f3f38590d258f7d09b65da4fb816c76b123352d2531,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121201747500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a32601104ebf78674d54d79144e835165efe3710687b6a61a6e11009905acd,PodSandboxId:11ad8e8d3095fed789474425b90c741241c2a47734b7d6d0ba5816f7742455e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721238121252764480,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aeb0a6d29b3db9397dbbe275b13b7c97ca27bb9e4805af79da925fbad61b1af,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721238121120781163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967342f385c9ab30f017f6226ebc0dd6e6f535d7abf22a5884c63765726387b1,PodSandboxId:eaf3b42369089d6ae0fa4f237fb70670b8c932dbb0828d6a377455465781939c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721238121047011528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-a
ca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0e4ff14e576853890e93a9a6a937dd70d94bc7822374634f11e460ae6b3749,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721238120916030757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d
-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0606a3ecaf44ed6f342c1b254dc982cccb31c0296258c76f4f5f18927216ea47,PodSandboxId:402fa28dbe97771bbb2f28fb97e12f7a31e3495a304ab85f4caefa22e16d24e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721238120999347906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annot
ations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721237628009986530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421982456232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kuberne
tes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421928772505,Labels:map[string]string{io.kubernetes.container.name: coredn
s,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721237410216101906,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721237406540064620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721237387075455569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721237387046822608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=945af23f-1a5e-4507-8454-80c8b23b61d2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.542567769Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7cde56d6-88be-48a8-9b5c-399f9cceea87 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.542654179Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7cde56d6-88be-48a8-9b5c-399f9cceea87 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.543945701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b19a5b0-2040-4dad-a185-1db0ad45b140 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.544366294Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721238429544345543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b19a5b0-2040-4dad-a185-1db0ad45b140 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.544793823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c0b5fcb-8af6-4d62-97b0-e58aa2be0616 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.544863659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c0b5fcb-8af6-4d62-97b0-e58aa2be0616 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.545253160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6c478540f2d235d413baa4d4b1eb115afe319327ba78736a44196ea41de7ad9,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721238169205488492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c844fa26b05ab402b5550aaf261619fd0941934823e012a82a8ef73c185a6f5a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721238164205844831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731dfd6c523fb7b9024ae6d32cdb21435f506403f647995fd463c05da6ca3883,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721238161221363556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa1c2154a9a4fe39461bed1d21fb6a362b654583e5f79e01e7f0c3c1391993,PodSandboxId:94b647eeb1369e4493dc85bcf955b2befed32d2521170c361a8a9b5399948e6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721238154435466676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1536d680fdb6c0b0a97dfde782f97e1b635bb7ba734b54c90f5a637e6121b403,PodSandboxId:c88634d156297e8cffe86a0c16661b1b84d415c5d7d9c9dd1752434ff17dc477,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721238133419777863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6ba04a85bfbff5a957b7732c295eff,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1cce1c506e03a9c4fbe0f8d38792493de36070eb5cdd03a5cedf085c157a6d,PodSandboxId:3662e32676f84a5c133443791f0a3a0f8f72902b220c453ec8112f3d3cd1d292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721238121368195649,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:52d1cab66f48cbd8674b8f411ab80487389fb12b4710edc84607d1ef666b676a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721238121194388588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:cb8f9753c4a91113f4d19fb976afdc57ba90f879488ae102acb94522b4753834,PodSandboxId:ce3f0eae7af4b8c50f907a2439475bdaebec3b8543d2009b15a632a25fbfd3c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121266226656,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f83ba4f05083b01b85083344f1ceece3524c9ed469106ad62b56da508a5126d,PodSandboxId:a8eeb7452b65fe4a38b12f3f38590d258f7d09b65da4fb816c76b123352d2531,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121201747500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a32601104ebf78674d54d79144e835165efe3710687b6a61a6e11009905acd,PodSandboxId:11ad8e8d3095fed789474425b90c741241c2a47734b7d6d0ba5816f7742455e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721238121252764480,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aeb0a6d29b3db9397dbbe275b13b7c97ca27bb9e4805af79da925fbad61b1af,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721238121120781163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967342f385c9ab30f017f6226ebc0dd6e6f535d7abf22a5884c63765726387b1,PodSandboxId:eaf3b42369089d6ae0fa4f237fb70670b8c932dbb0828d6a377455465781939c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721238121047011528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-a
ca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0e4ff14e576853890e93a9a6a937dd70d94bc7822374634f11e460ae6b3749,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721238120916030757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d
-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0606a3ecaf44ed6f342c1b254dc982cccb31c0296258c76f4f5f18927216ea47,PodSandboxId:402fa28dbe97771bbb2f28fb97e12f7a31e3495a304ab85f4caefa22e16d24e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721238120999347906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annot
ations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721237628009986530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421982456232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kuberne
tes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421928772505,Labels:map[string]string{io.kubernetes.container.name: coredn
s,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721237410216101906,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721237406540064620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721237387075455569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721237387046822608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c0b5fcb-8af6-4d62-97b0-e58aa2be0616 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.583394249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10675849-eb59-4113-aa8d-6799d0838c73 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.583605805Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10675849-eb59-4113-aa8d-6799d0838c73 name=/runtime.v1.RuntimeService/Version
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.584776600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a771dfc-7dc0-4c96-a2c8-fb0b3a85c098 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.585235609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721238429585213816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154767,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a771dfc-7dc0-4c96-a2c8-fb0b3a85c098 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.585644288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02c3a296-f91c-4b1a-b41f-10f04cc95619 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.585765979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02c3a296-f91c-4b1a-b41f-10f04cc95619 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 17:47:09 ha-174628 crio[3765]: time="2024-07-17 17:47:09.586172749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b6c478540f2d235d413baa4d4b1eb115afe319327ba78736a44196ea41de7ad9,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721238169205488492,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c844fa26b05ab402b5550aaf261619fd0941934823e012a82a8ef73c185a6f5a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721238164205844831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731dfd6c523fb7b9024ae6d32cdb21435f506403f647995fd463c05da6ca3883,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721238161221363556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19fa1c2154a9a4fe39461bed1d21fb6a362b654583e5f79e01e7f0c3c1391993,PodSandboxId:94b647eeb1369e4493dc85bcf955b2befed32d2521170c361a8a9b5399948e6e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721238154435466676,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotations:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1536d680fdb6c0b0a97dfde782f97e1b635bb7ba734b54c90f5a637e6121b403,PodSandboxId:c88634d156297e8cffe86a0c16661b1b84d415c5d7d9c9dd1752434ff17dc477,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721238133419777863,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6ba04a85bfbff5a957b7732c295eff,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolic
y: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef1cce1c506e03a9c4fbe0f8d38792493de36070eb5cdd03a5cedf085c157a6d,PodSandboxId:3662e32676f84a5c133443791f0a3a0f8f72902b220c453ec8112f3d3cd1d292,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721238121368195649,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:52d1cab66f48cbd8674b8f411ab80487389fb12b4710edc84607d1ef666b676a,PodSandboxId:af17cf9fe4c4440ee1fe91571807a31ccd044ee640fce23302e4723dd8b37344,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721238121194388588,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc801341b913ca6bb6e3fd73c9182232,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:cb8f9753c4a91113f4d19fb976afdc57ba90f879488ae102acb94522b4753834,PodSandboxId:ce3f0eae7af4b8c50f907a2439475bdaebec3b8543d2009b15a632a25fbfd3c3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121266226656,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f83ba4f05083b01b85083344f1ceece3524c9ed469106ad62b56da508a5126d,PodSandboxId:a8eeb7452b65fe4a38b12f3f38590d258f7d09b65da4fb816c76b123352d2531,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721238121201747500,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kubernetes.container.hash: a79fc9d3,io.kubernetes.contai
ner.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a32601104ebf78674d54d79144e835165efe3710687b6a61a6e11009905acd,PodSandboxId:11ad8e8d3095fed789474425b90c741241c2a47734b7d6d0ba5816f7742455e5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721238121252764480,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5aeb0a6d29b3db9397dbbe275b13b7c97ca27bb9e4805af79da925fbad61b1af,PodSandboxId:69b8050f37d85d8dad2e585d43f0c13114482d2d364ed451178a76fac7310ba3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721238121120781163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-174628,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: de7a365d4a82da636f5e615f6e397e41,},Annotations:map[string]string{io.kubernetes.container.hash: 87829c6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:967342f385c9ab30f017f6226ebc0dd6e6f535d7abf22a5884c63765726387b1,PodSandboxId:eaf3b42369089d6ae0fa4f237fb70670b8c932dbb0828d6a377455465781939c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721238121047011528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-a
ca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0e4ff14e576853890e93a9a6a937dd70d94bc7822374634f11e460ae6b3749,PodSandboxId:716f5357f3d2ec94c87dc1d590f2f36bf46bec33b4a5349b5fdabb940e762aaf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721238120916030757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0601bb-36f6-434d-8e9d
-1e326bf682f5,},Annotations:map[string]string{io.kubernetes.container.hash: c5796c23,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0606a3ecaf44ed6f342c1b254dc982cccb31c0296258c76f4f5f18927216ea47,PodSandboxId:402fa28dbe97771bbb2f28fb97e12f7a31e3495a304ab85f4caefa22e16d24e5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721238120999347906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annot
ations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ba3b0cb31056a097c546ca8141ac7564e6022cadb85edf29ba47557a51733d,PodSandboxId:c4d7c5b8a369b3ca7e96adc39aead8151091b963180b08a1b2ef2b4245ec48cb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721237628009986530,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-8zv26,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fe9c4738-6334-4fc5-b8a3-dc249512fa0a,},Annotati
ons:map[string]string{io.kubernetes.container.hash: c9fdecea,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9,PodSandboxId:6732d32de6a25fb20f393a32e59086415ffdac958b4ea3ecc08d87b546e14b8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421982456232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ljjl7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4857a1-6ccd-4122-80b5-f5bcfd2e307f,},Annotations:map[string]string{io.kuberne
tes.container.hash: a79fc9d3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb,PodSandboxId:9ca7e3b66f8e6bdb05a92075bc83783c24a28fc0ea9a232500bb3138c8f42c31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721237421928772505,Labels:map[string]string{io.kubernetes.container.name: coredn
s,io.kubernetes.pod.name: coredns-7db6d8ff4d-nb567,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1739ac64-be05-4438-9a8f-a0d2821a1650,},Annotations:map[string]string{io.kubernetes.container.hash: 26dcfbd3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0,PodSandboxId:db21995c3cb316562cf180bae778c5133896e24cd5116b45bed640afd42af3d1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721237410216101906,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-k6jnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bca93ed-aca5-4540-990c-d9e6209d12d0,},Annotations:map[string]string{io.kubernetes.container.hash: a563e631,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78,PodSandboxId:4b7a03b7f681c44808713ebd8f6e508890f50fbad11596e759b16e68b1337b49,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721237406540064620,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fqf9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74d57a9-38a2-464d-991f-fc8905fdbe3f,},Annotations:map[string]string{io.kubernetes.container.hash: d92182a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147,PodSandboxId:d488537da13816a8c9df8fa19c34d4a179b9a213b02d7d94d9d0669dba286d9c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06278
8eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721237387075455569,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb8260866404ea84b14c26f81effc219,},Annotations:map[string]string{io.kubernetes.container.hash: 682daa08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9,PodSandboxId:4c7f495eb3d6ad87875ce5d24179f4c1ecf0ac3a30f4c284773543fd4dd21ddc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,
State:CONTAINER_EXITED,CreatedAt:1721237387046822608,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-174628,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57815d244795c90550b97bbf781e6e77,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02c3a296-f91c-4b1a-b41f-10f04cc95619 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6c478540f2d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   716f5357f3d2e       storage-provisioner
	c844fa26b05ab       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   2                   af17cf9fe4c44       kube-controller-manager-ha-174628
	731dfd6c523fb       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            3                   69b8050f37d85       kube-apiserver-ha-174628
	19fa1c2154a9a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   94b647eeb1369       busybox-fc5497c4f-8zv26
	1536d680fdb6c       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   c88634d156297       kube-vip-ha-174628
	ef1cce1c506e0       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      5 minutes ago       Running             kube-proxy                1                   3662e32676f84       kube-proxy-fqf9q
	cb8f9753c4a91       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   ce3f0eae7af4b       coredns-7db6d8ff4d-nb567
	06a32601104eb       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   11ad8e8d3095f       etcd-ha-174628
	6f83ba4f05083       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   a8eeb7452b65f       coredns-7db6d8ff4d-ljjl7
	52d1cab66f48c       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      5 minutes ago       Exited              kube-controller-manager   1                   af17cf9fe4c44       kube-controller-manager-ha-174628
	5aeb0a6d29b3d       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      5 minutes ago       Exited              kube-apiserver            2                   69b8050f37d85       kube-apiserver-ha-174628
	967342f385c9a       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      5 minutes ago       Running             kindnet-cni               1                   eaf3b42369089       kindnet-k6jnp
	0606a3ecaf44e       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      5 minutes ago       Running             kube-scheduler            1                   402fa28dbe977       kube-scheduler-ha-174628
	6d0e4ff14e576       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   716f5357f3d2e       storage-provisioner
	88ba3b0cb3105       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   c4d7c5b8a369b       busybox-fc5497c4f-8zv26
	976aeedd4a51e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   6732d32de6a25       coredns-7db6d8ff4d-ljjl7
	97987539971dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   9ca7e3b66f8e6       coredns-7db6d8ff4d-nb567
	2fefa59bf46cd       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    16 minutes ago      Exited              kindnet-cni               0                   db21995c3cb31       kindnet-k6jnp
	d139046cefa3a       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      17 minutes ago      Exited              kube-proxy                0                   4b7a03b7f681c       kube-proxy-fqf9q
	e1c91b7db4ab1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   d488537da1381       etcd-ha-174628
	889d28a83e85b       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      17 minutes ago      Exited              kube-scheduler            0                   4c7f495eb3d6a       kube-scheduler-ha-174628
	
	
	==> coredns [6f83ba4f05083b01b85083344f1ceece3524c9ed469106ad62b56da508a5126d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:34304->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1729886385]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:42:13.196) (total time: 10058ms):
	Trace[1729886385]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:34304->10.96.0.1:443: read: connection reset by peer 10058ms (17:42:23.255)
	Trace[1729886385]: [10.058588339s] [10.058588339s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:34304->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:34334->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:34334->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [976aeedd4a51eeb05fcfbac860254d72b66106761829b6c832d51de7a839c2f9] <==
	[INFO] 10.244.0.4:42628 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001405769s
	[INFO] 10.244.0.4:53106 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132475s
	[INFO] 10.244.1.2:56143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010532s
	[INFO] 10.244.1.2:57864 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093166s
	[INFO] 10.244.1.2:36333 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127244s
	[INFO] 10.244.1.2:59545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001305574s
	[INFO] 10.244.1.2:38967 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068655s
	[INFO] 10.244.2.2:42756 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113607s
	[INFO] 10.244.2.2:43563 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069199s
	[INFO] 10.244.0.4:59480 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109399s
	[INFO] 10.244.0.4:42046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068182s
	[INFO] 10.244.0.4:52729 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087202s
	[INFO] 10.244.1.2:54148 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000075008s
	[INFO] 10.244.2.2:34613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101677s
	[INFO] 10.244.2.2:34221 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203479s
	[INFO] 10.244.0.4:35705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000081127s
	[INFO] 10.244.0.4:36734 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090761s
	[INFO] 10.244.1.2:34328 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093559s
	[INFO] 10.244.1.2:39930 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000149652s
	[INFO] 10.244.1.2:55584 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000101975s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [97987539971ddf211d9bc183b6ea334075a3e9d4ff601c16121b74f07375c3eb] <==
	[INFO] 10.244.1.2:51622 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00157319s
	[INFO] 10.244.2.2:60810 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001843s
	[INFO] 10.244.2.2:59317 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00028437s
	[INFO] 10.244.2.2:38028 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000131271s
	[INFO] 10.244.0.4:34076 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000171504s
	[INFO] 10.244.0.4:47718 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126429s
	[INFO] 10.244.1.2:45110 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001972368s
	[INFO] 10.244.1.2:56072 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000151997s
	[INFO] 10.244.1.2:56149 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091586s
	[INFO] 10.244.2.2:58101 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116587s
	[INFO] 10.244.2.2:38105 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059217s
	[INFO] 10.244.0.4:33680 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067251s
	[INFO] 10.244.1.2:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149516s
	[INFO] 10.244.1.2:49668 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000120356s
	[INFO] 10.244.1.2:39442 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065763s
	[INFO] 10.244.2.2:49955 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116571s
	[INFO] 10.244.2.2:46651 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00013941s
	[INFO] 10.244.0.4:39128 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097533s
	[INFO] 10.244.0.4:36840 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042262s
	[INFO] 10.244.1.2:36575 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084857s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cb8f9753c4a91113f4d19fb976afdc57ba90f879488ae102acb94522b4753834] <==
	Trace[1915939027]: [10.00088773s] [10.00088773s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41532->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:41532->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41540->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[389132317]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Jul-2024 17:42:16.236) (total time: 10069ms):
	Trace[389132317]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41540->10.96.0.1:443: read: connection reset by peer 10069ms (17:42:26.306)
	Trace[389132317]: [10.069490426s] [10.069490426s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:41540->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-174628
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T17_29_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:29:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:47:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:42:42 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:42:42 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:42:42 +0000   Wed, 17 Jul 2024 17:29:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:42:42 +0000   Wed, 17 Jul 2024 17:30:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    ha-174628
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 38d679c72879470c96b5b9e9677b521d
	  System UUID:                38d679c7-2879-470c-96b5-b9e9677b521d
	  Boot ID:                    dc99f06a-b6ac-4ceb-b149-a41be92c5af1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8zv26              0 (0%)        0 (0%)      0 (0%)        0 (0%)       13m
	  kube-system                 coredns-7db6d8ff4d-ljjl7             100m (5%)     0 (0%)      70Mi (3%)     170Mi (8%)   17m
	  kube-system                 coredns-7db6d8ff4d-nb567             100m (5%)     0 (0%)      70Mi (3%)     170Mi (8%)   17m
	  kube-system                 etcd-ha-174628                       100m (5%)     0 (0%)      100Mi (4%)    0 (0%)       17m
	  kube-system                 kindnet-k6jnp                        100m (5%)     100m (5%)   50Mi (2%)     50Mi (2%)    17m
	  kube-system                 kube-apiserver-ha-174628             250m (12%)    0 (0%)      0 (0%)        0 (0%)       17m
	  kube-system                 kube-controller-manager-ha-174628    200m (10%)    0 (0%)      0 (0%)        0 (0%)       17m
	  kube-system                 kube-proxy-fqf9q                     0 (0%)        0 (0%)      0 (0%)        0 (0%)       17m
	  kube-system                 kube-scheduler-ha-174628             100m (5%)     0 (0%)      0 (0%)        0 (0%)       17m
	  kube-system                 kube-vip-ha-174628                   0 (0%)        0 (0%)      0 (0%)        0 (0%)       3m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)        0 (0%)       17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 4m25s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-174628 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-174628 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-174628 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-174628 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Warning  ContainerGCFailed        5m16s (x2 over 6m16s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal   RegisteredNode           4m13s                  node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	  Normal   RegisteredNode           3m4s                   node-controller  Node ha-174628 event: Registered Node ha-174628 in Controller
	
	
	Name:               ha-174628-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T17_32_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:32:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:47:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 17:43:55 +0000   Wed, 17 Jul 2024 17:42:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 17:43:55 +0000   Wed, 17 Jul 2024 17:42:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 17:43:55 +0000   Wed, 17 Jul 2024 17:42:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 17:43:55 +0000   Wed, 17 Jul 2024 17:42:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.97
	  Hostname:    ha-174628-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 903b989e686a4ab6b3e3c3b6b498bfac
	  System UUID:                903b989e-686a-4ab6-b3e3-c3b6b498bfac
	  Boot ID:                    67d94d54-1e0a-423d-8e6e-512d0032972e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ftgzz                  0 (0%)        0 (0%)      0 (0%)        0 (0%)      13m
	  kube-system                 etcd-ha-174628-m02                       100m (5%)     0 (0%)      100Mi (4%)    0 (0%)      15m
	  kube-system                 kindnet-79txz                            100m (5%)     100m (5%)   50Mi (2%)     50Mi (2%)   15m
	  kube-system                 kube-apiserver-ha-174628-m02             250m (12%)    0 (0%)      0 (0%)        0 (0%)      15m
	  kube-system                 kube-controller-manager-ha-174628-m02    200m (10%)    0 (0%)      0 (0%)        0 (0%)      15m
	  kube-system                 kube-proxy-7lchn                         0 (0%)        0 (0%)      0 (0%)        0 (0%)      15m
	  kube-system                 kube-scheduler-ha-174628-m02             100m (5%)     0 (0%)      0 (0%)        0 (0%)      14m
	  kube-system                 kube-vip-ha-174628-m02                   0 (0%)        0 (0%)      0 (0%)        0 (0%)      15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m58s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-174628-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-174628-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-174628-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-174628-m02 status is now: NodeNotReady
	  Normal  Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m55s (x8 over 4m55s)  kubelet          Node ha-174628-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s (x8 over 4m55s)  kubelet          Node ha-174628-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s (x7 over 4m55s)  kubelet          Node ha-174628-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	  Normal  RegisteredNode           3m5s                   node-controller  Node ha-174628-m02 event: Registered Node ha-174628-m02 in Controller
	
	
	Name:               ha-174628-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-174628-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=ha-174628
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T17_34_18_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:34:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-174628-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 17:44:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 17:44:23 +0000   Wed, 17 Jul 2024 17:45:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 17:44:23 +0000   Wed, 17 Jul 2024 17:45:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 17:44:23 +0000   Wed, 17 Jul 2024 17:45:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 17:44:23 +0000   Wed, 17 Jul 2024 17:45:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.161
	  Hostname:    ha-174628-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1beb916d1ab94a9e97732204939d8f7c
	  System UUID:                1beb916d-1ab9-4a9e-9773-2204939d8f7c
	  Boot ID:                    9c57600f-4318-45f1-8bee-1e5facd32841
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xwf2b    0 (0%)       0 (0%)      0 (0%)      0 (0%)     2m38s
	  kube-system                 kindnet-pt58p              100m (5%)    100m (5%)   50Mi (2%)   50Mi (2%)  12m
	  kube-system                 kube-proxy-gb548           0 (0%)       0 (0%)      0 (0%)      0 (0%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-174628-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-174628-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-174628-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-174628-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   RegisteredNode           4m14s                  node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   RegisteredNode           3m5s                   node-controller  Node ha-174628-m04 event: Registered Node ha-174628-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x3 over 2m47s)  kubelet          Node ha-174628-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x3 over 2m47s)  kubelet          Node ha-174628-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x3 over 2m47s)  kubelet          Node ha-174628-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s (x2 over 2m47s)  kubelet          Node ha-174628-m04 has been rebooted, boot id: 9c57600f-4318-45f1-8bee-1e5facd32841
	  Normal   NodeReady                2m47s (x2 over 2m47s)  kubelet          Node ha-174628-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s (x2 over 3m40s)   node-controller  Node ha-174628-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.259835] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.065539] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054802] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.175947] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.103995] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.251338] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.953322] systemd-fstab-generator[757]: Ignoring "noauto" option for root device
	[  +4.318399] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +0.059032] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.943760] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.083790] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.749430] kauditd_printk_skb: 18 callbacks suppressed
	[Jul17 17:30] kauditd_printk_skb: 38 callbacks suppressed
	[Jul17 17:32] kauditd_printk_skb: 26 callbacks suppressed
	[Jul17 17:38] kauditd_printk_skb: 1 callbacks suppressed
	[Jul17 17:41] systemd-fstab-generator[3683]: Ignoring "noauto" option for root device
	[  +0.138073] systemd-fstab-generator[3695]: Ignoring "noauto" option for root device
	[  +0.172600] systemd-fstab-generator[3709]: Ignoring "noauto" option for root device
	[  +0.161501] systemd-fstab-generator[3721]: Ignoring "noauto" option for root device
	[  +0.271525] systemd-fstab-generator[3749]: Ignoring "noauto" option for root device
	[  +0.728523] systemd-fstab-generator[3851]: Ignoring "noauto" option for root device
	[  +5.553998] kauditd_printk_skb: 122 callbacks suppressed
	[Jul17 17:42] kauditd_printk_skb: 85 callbacks suppressed
	[ +43.397968] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [06a32601104ebf78674d54d79144e835165efe3710687b6a61a6e11009905acd] <==
	{"level":"info","ts":"2024-07-17T17:43:46.501137Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:43:46.50919Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3276445ff8d31e34","to":"6dbdd402c8b44d8e","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-17T17:43:46.509314Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:43:46.511188Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3276445ff8d31e34","to":"6dbdd402c8b44d8e","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-17T17:43:46.51135Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"warn","ts":"2024-07-17T17:43:47.172078Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"6dbdd402c8b44d8e","rtt":"0s","error":"dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:43:47.172263Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"6dbdd402c8b44d8e","rtt":"0s","error":"dial tcp 192.168.39.187:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-17T17:44:35.92604Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.187:42722","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-17T17:44:35.940281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 switched to configuration voters=(3636168928135421492 14751460896940825542)"}
	{"level":"info","ts":"2024-07-17T17:44:35.942536Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","removed-remote-peer-id":"6dbdd402c8b44d8e","removed-remote-peer-urls":["https://192.168.39.187:2380"]}
	{"level":"info","ts":"2024-07-17T17:44:35.942644Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"warn","ts":"2024-07-17T17:44:35.942797Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:44:35.942837Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"warn","ts":"2024-07-17T17:44:35.942944Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:44:35.942982Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:44:35.943174Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"warn","ts":"2024-07-17T17:44:35.943358Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e","error":"context canceled"}
	{"level":"warn","ts":"2024-07-17T17:44:35.943404Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"6dbdd402c8b44d8e","error":"failed to read 6dbdd402c8b44d8e on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-17T17:44:35.943438Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"warn","ts":"2024-07-17T17:44:35.943587Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e","error":"context canceled"}
	{"level":"info","ts":"2024-07-17T17:44:35.943625Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:44:35.943642Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:44:35.943653Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"3276445ff8d31e34","removed-remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"warn","ts":"2024-07-17T17:44:35.965415Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"3276445ff8d31e34","remote-peer-id-stream-handler":"3276445ff8d31e34","remote-peer-id-from":"6dbdd402c8b44d8e"}
	{"level":"warn","ts":"2024-07-17T17:44:35.965565Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"3276445ff8d31e34","remote-peer-id-stream-handler":"3276445ff8d31e34","remote-peer-id-from":"6dbdd402c8b44d8e"}
	
	
	==> etcd [e1c91b7db4ab19020052e950f50fe166ca4a5b6e4b2894c919b690bb561b9147] <==
	2024/07/17 17:40:22 WARNING: [core] [Server #9] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T17:40:22.292614Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"833.91074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-17T17:40:22.301553Z","caller":"traceutil/trace.go:171","msg":"trace[1782621430] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; }","duration":"843.026193ms","start":"2024-07-17T17:40:21.458521Z","end":"2024-07-17T17:40:22.301548Z","steps":["trace[1782621430] 'agreement among raft nodes before linearized reading'  (duration: 834.088765ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:40:22.301595Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-17T17:40:21.458514Z","time spent":"843.07347ms","remote":"127.0.0.1:46628","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":0,"response size":0,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" limit:10000 "}
	2024/07/17 17:40:22 WARNING: [core] [Server #9] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-17T17:40:22.434956Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T17:40:22.435008Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T17:40:22.436477Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3276445ff8d31e34","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-17T17:40:22.436767Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.436811Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.436855Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.436959Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.437014Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.437063Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.437091Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ccb7b78778391bc6"}
	{"level":"info","ts":"2024-07-17T17:40:22.437114Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.43714Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.437179Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.437259Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.43732Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.437368Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3276445ff8d31e34","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.437395Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"6dbdd402c8b44d8e"}
	{"level":"info","ts":"2024-07-17T17:40:22.440008Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-17T17:40:22.440113Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-07-17T17:40:22.440135Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-174628","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> kernel <==
	 17:47:10 up 17 min,  0 users,  load average: 0.07, 0.35, 0.28
	Linux ha-174628 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2fefa59bf46cdc50a42273321071f9fde7193b7095037954c20475d84ad24fc0] <==
	I0717 17:40:01.261382       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:40:01.261401       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:40:01.261568       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:40:01.261590       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:40:01.261729       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:40:01.261780       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	E0717 17:40:10.369232       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1947&timeout=6m0s&timeoutSeconds=360&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=5, ErrCode=NO_ERROR, debug=""
	I0717 17:40:11.258112       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:40:11.259039       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:40:11.259293       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:40:11.259321       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:40:11.259391       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:40:11.259410       1 main.go:303] handling current node
	I0717 17:40:11.259435       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:40:11.259452       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	W0717 17:40:20.470774       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	E0717 17:40:20.471203       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	I0717 17:40:21.258757       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:40:21.258847       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:40:21.259087       1 main.go:299] Handling node with IPs: map[192.168.39.187:{}]
	I0717 17:40:21.259143       1 main.go:326] Node ha-174628-m03 has CIDR [10.244.2.0/24] 
	I0717 17:40:21.259219       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:40:21.259240       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:40:21.259301       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:40:21.259327       1 main.go:303] handling current node
	
	
	==> kindnet [967342f385c9ab30f017f6226ebc0dd6e6f535d7abf22a5884c63765726387b1] <==
	I0717 17:46:22.086774       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:46:32.088847       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:46:32.088934       1 main.go:303] handling current node
	I0717 17:46:32.088972       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:46:32.088978       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:46:32.089182       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:46:32.089203       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:46:42.089761       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:46:42.089883       1 main.go:303] handling current node
	I0717 17:46:42.089910       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:46:42.089942       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:46:42.090132       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:46:42.090155       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:46:52.086228       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:46:52.086269       1 main.go:303] handling current node
	I0717 17:46:52.086283       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:46:52.086288       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:46:52.086498       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:46:52.086533       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	I0717 17:47:02.080491       1 main.go:299] Handling node with IPs: map[192.168.39.100:{}]
	I0717 17:47:02.080610       1 main.go:303] handling current node
	I0717 17:47:02.080641       1 main.go:299] Handling node with IPs: map[192.168.39.97:{}]
	I0717 17:47:02.080712       1 main.go:326] Node ha-174628-m02 has CIDR [10.244.1.0/24] 
	I0717 17:47:02.080900       1 main.go:299] Handling node with IPs: map[192.168.39.161:{}]
	I0717 17:47:02.080948       1 main.go:326] Node ha-174628-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5aeb0a6d29b3db9397dbbe275b13b7c97ca27bb9e4805af79da925fbad61b1af] <==
	I0717 17:42:01.603289       1 options.go:221] external host was not specified, using 192.168.39.100
	I0717 17:42:01.607520       1 server.go:148] Version: v1.30.2
	I0717 17:42:01.607623       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:42:02.239127       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0717 17:42:02.248280       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0717 17:42:02.248313       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0717 17:42:02.248477       1 instance.go:299] Using reconciler: lease
	I0717 17:42:02.249029       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0717 17:42:22.237174       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0717 17:42:22.238197       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0717 17:42:22.249841       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0717 17:42:22.249844       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [731dfd6c523fb7b9024ae6d32cdb21435f506403f647995fd463c05da6ca3883] <==
	I0717 17:42:43.475377       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0717 17:42:43.557796       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 17:42:43.559104       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 17:42:43.559133       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 17:42:43.559269       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 17:42:43.559545       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 17:42:43.561986       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 17:42:43.568719       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0717 17:42:43.574605       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.187 192.168.39.97]
	I0717 17:42:43.575950       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 17:42:43.576129       1 aggregator.go:165] initial CRD sync complete...
	I0717 17:42:43.576180       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 17:42:43.576206       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 17:42:43.576231       1 cache.go:39] Caches are synced for autoregister controller
	I0717 17:42:43.582005       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 17:42:43.591938       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 17:42:43.591958       1 policy_source.go:224] refreshing policies
	I0717 17:42:43.657968       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 17:42:43.676180       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 17:42:43.686073       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0717 17:42:43.689417       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0717 17:42:44.468749       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0717 17:42:45.013622       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.187 192.168.39.97]
	W0717 17:42:55.010584       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.97]
	W0717 17:44:55.013002       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100 192.168.39.97]
	
	
	==> kube-controller-manager [52d1cab66f48cbd8674b8f411ab80487389fb12b4710edc84607d1ef666b676a] <==
	I0717 17:42:02.795212       1 serving.go:380] Generated self-signed cert in-memory
	I0717 17:42:03.017741       1 controllermanager.go:189] "Starting" version="v1.30.2"
	I0717 17:42:03.017778       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:42:03.019260       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 17:42:03.019429       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0717 17:42:03.019506       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 17:42:03.019790       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0717 17:42:23.257017       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.100:8443/healthz\": dial tcp 192.168.39.100:8443: connect: connection refused"
	
	
	==> kube-controller-manager [c844fa26b05ab402b5550aaf261619fd0941934823e012a82a8ef73c185a6f5a] <==
	I0717 17:44:32.746127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="142.76199ms"
	I0717 17:44:32.831496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.236717ms"
	I0717 17:44:32.875396       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.619277ms"
	I0717 17:44:32.875545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.83µs"
	I0717 17:44:33.010468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.314µs"
	I0717 17:44:34.824145       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="124.746µs"
	I0717 17:44:35.399588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.721µs"
	I0717 17:44:35.425736       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="144.065µs"
	I0717 17:44:35.435726       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.181µs"
	I0717 17:44:36.224970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.363214ms"
	I0717 17:44:36.227841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.506µs"
	I0717 17:44:47.527790       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-174628-m04"
	E0717 17:44:47.572417       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"ha-174628-m03", UID:"a2eedd24-193f-429c-bc28-f301ab759d88", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-174628-m03", UID:"e337618d-44b3-471e-8d7f-061663ab9b32", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io "ha-174628-m03" not found
	E0717 17:44:56.697052       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174628-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174628-m03"
	E0717 17:44:56.697179       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174628-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174628-m03"
	E0717 17:44:56.697206       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174628-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174628-m03"
	E0717 17:44:56.697239       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174628-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174628-m03"
	E0717 17:44:56.697265       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174628-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174628-m03"
	E0717 17:45:16.698213       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174628-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174628-m03"
	E0717 17:45:16.698323       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174628-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174628-m03"
	E0717 17:45:16.698332       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174628-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174628-m03"
	E0717 17:45:16.698337       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174628-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174628-m03"
	E0717 17:45:16.698347       1 gc_controller.go:153] "Failed to get node" err="node \"ha-174628-m03\" not found" logger="pod-garbage-collector-controller" node="ha-174628-m03"
	I0717 17:45:25.363790       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.495392ms"
	I0717 17:45:25.364926       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="128.042µs"
	
	
	==> kube-proxy [d139046cefa3a15b52bb859abb66b75b8897b78cdbb1e0c1651fcc39f6c5fc78] <==
	E0717 17:39:12.066154       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:15.137076       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:15.137129       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:15.137197       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:15.137211       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:15.137275       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:15.137382       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:21.282476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:21.282708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:21.282494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:21.282753       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:21.282889       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:21.282992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:30.498010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:30.498284       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:33.570053       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:33.570458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:33.570512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:33.570472       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:58.146189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:58.146389       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-174628&resourceVersion=1969": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:58.146263       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:58.146512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1893": dial tcp 192.168.39.254:8443: connect: no route to host
	W0717 17:39:58.146330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0717 17:39:58.146549       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [ef1cce1c506e03a9c4fbe0f8d38792493de36070eb5cdd03a5cedf085c157a6d] <==
	I0717 17:42:02.638020       1 server_linux.go:69] "Using iptables proxy"
	E0717 17:42:04.097177       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174628\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 17:42:07.170005       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174628\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 17:42:10.241936       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174628\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 17:42:16.387081       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174628\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0717 17:42:25.602197       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-174628\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0717 17:42:44.279608       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0717 17:42:44.385416       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:42:44.385493       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:42:44.385514       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:42:44.387822       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:42:44.388044       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:42:44.388070       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:42:44.389616       1 config.go:192] "Starting service config controller"
	I0717 17:42:44.389687       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:42:44.389740       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:42:44.389758       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:42:44.390380       1 config.go:319] "Starting node config controller"
	I0717 17:42:44.390439       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:42:44.490757       1 shared_informer.go:320] Caches are synced for node config
	I0717 17:42:44.490820       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:42:44.490850       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [0606a3ecaf44ed6f342c1b254dc982cccb31c0296258c76f4f5f18927216ea47] <==
	W0717 17:42:38.168739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.100:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:38.168807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.100:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:38.635500       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.100:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:38.635607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.100:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:38.731565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:38.731719       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:38.975577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.100:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:38.975653       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.100:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:39.406984       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:39.407088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:39.611600       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:39.611780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:40.052319       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:40.052423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:40.652860       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.100:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:40.652983       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.100:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:41.043736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.100:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:41.043868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.100:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:41.243289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	E0717 17:42:41.243327       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.100:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.100:8443: connect: connection refused
	W0717 17:42:43.485375       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 17:42:43.485427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 17:42:43.485511       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:42:43.485540       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0717 17:43:03.162737       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [889d28a83e85b4b7fb62278bf3cabcddf822b97aa8f93bace0286fe1e83acfe9] <==
	W0717 17:40:17.790129       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 17:40:17.790291       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 17:40:18.294745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 17:40:18.294915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 17:40:18.308036       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 17:40:18.308167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 17:40:18.588842       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 17:40:18.589014       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 17:40:18.840315       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 17:40:18.840391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 17:40:18.898586       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 17:40:18.898713       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 17:40:19.086464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:40:19.086560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:40:19.097080       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 17:40:19.097166       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 17:40:19.125539       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 17:40:19.125571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 17:40:19.271993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:40:19.272113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:40:19.372021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:40:19.372107       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:40:19.578961       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:40:19.579004       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:40:22.259775       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 17 17:42:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:42:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:42:53 ha-174628 kubelet[1358]: I0717 17:42:53.252352    1358 scope.go:117] "RemoveContainer" containerID="4c2a82d5779c30132aa024c001d6b11525959eaf1e17d978f6a60cf60c14ea2e"
	Jul 17 17:43:45 ha-174628 kubelet[1358]: I0717 17:43:45.193472    1358 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-174628" podUID="b2d62768-e68e-4ce3-ad84-31ddac00688e"
	Jul 17 17:43:45 ha-174628 kubelet[1358]: I0717 17:43:45.213724    1358 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-174628"
	Jul 17 17:43:53 ha-174628 kubelet[1358]: E0717 17:43:53.219557    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:43:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:43:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:43:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:43:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:44:53 ha-174628 kubelet[1358]: E0717 17:44:53.208532    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:44:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:44:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:44:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:44:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:45:53 ha-174628 kubelet[1358]: E0717 17:45:53.208609    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:45:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:45:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:45:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:45:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 17:46:53 ha-174628 kubelet[1358]: E0717 17:46:53.208180    1358 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 17:46:53 ha-174628 kubelet[1358]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 17:46:53 ha-174628 kubelet[1358]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 17:46:53 ha-174628 kubelet[1358]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 17:46:53 ha-174628 kubelet[1358]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 17:47:09.209446   41651 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19283-14386/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-174628 -n ha-174628
helpers_test.go:261: (dbg) Run:  kubectl --context ha-174628 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.51s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (325.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-866205
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-866205
E0717 18:03:21.395840   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-866205: exit status 82 (2m1.755670774s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-866205-m03"  ...
	* Stopping node "multinode-866205-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-866205" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-866205 --wait=true -v=8 --alsologtostderr
E0717 18:05:41.791630   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 18:06:24.442157   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-866205 --wait=true -v=8 --alsologtostderr: (3m21.530151219s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-866205
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-866205 -n multinode-866205
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-866205 logs -n 25: (1.394474216s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp multinode-866205-m02:/home/docker/cp-test.txt                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1415765283/001/cp-test_multinode-866205-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp multinode-866205-m02:/home/docker/cp-test.txt                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205:/home/docker/cp-test_multinode-866205-m02_multinode-866205.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n multinode-866205 sudo cat                                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | /home/docker/cp-test_multinode-866205-m02_multinode-866205.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp multinode-866205-m02:/home/docker/cp-test.txt                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03:/home/docker/cp-test_multinode-866205-m02_multinode-866205-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n multinode-866205-m03 sudo cat                                   | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | /home/docker/cp-test_multinode-866205-m02_multinode-866205-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp testdata/cp-test.txt                                                | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp multinode-866205-m03:/home/docker/cp-test.txt                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1415765283/001/cp-test_multinode-866205-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp multinode-866205-m03:/home/docker/cp-test.txt                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205:/home/docker/cp-test_multinode-866205-m03_multinode-866205.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n multinode-866205 sudo cat                                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | /home/docker/cp-test_multinode-866205-m03_multinode-866205.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp multinode-866205-m03:/home/docker/cp-test.txt                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m02:/home/docker/cp-test_multinode-866205-m03_multinode-866205-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n multinode-866205-m02 sudo cat                                   | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | /home/docker/cp-test_multinode-866205-m03_multinode-866205-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-866205 node stop m03                                                          | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	| node    | multinode-866205 node start                                                             | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-866205                                                                | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:01 UTC |                     |
	| stop    | -p multinode-866205                                                                     | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:01 UTC |                     |
	| start   | -p multinode-866205                                                                     | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:03 UTC | 17 Jul 24 18:06 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-866205                                                                | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:06 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:03:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:03:32.846683   50854 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:03:32.846928   50854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:03:32.846936   50854 out.go:304] Setting ErrFile to fd 2...
	I0717 18:03:32.846940   50854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:03:32.847120   50854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:03:32.847622   50854 out.go:298] Setting JSON to false
	I0717 18:03:32.848505   50854 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6356,"bootTime":1721233057,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:03:32.848566   50854 start.go:139] virtualization: kvm guest
	I0717 18:03:32.850891   50854 out.go:177] * [multinode-866205] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:03:32.852331   50854 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:03:32.852349   50854 notify.go:220] Checking for updates...
	I0717 18:03:32.854843   50854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:03:32.856079   50854 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:03:32.857102   50854 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:03:32.858358   50854 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:03:32.859592   50854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:03:32.861195   50854 config.go:182] Loaded profile config "multinode-866205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:03:32.861290   50854 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:03:32.861710   50854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:03:32.861753   50854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:03:32.877874   50854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0717 18:03:32.878350   50854 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:03:32.878957   50854 main.go:141] libmachine: Using API Version  1
	I0717 18:03:32.878977   50854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:03:32.879275   50854 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:03:32.879446   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:03:32.914763   50854 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:03:32.916013   50854 start.go:297] selected driver: kvm2
	I0717 18:03:32.916027   50854 start.go:901] validating driver "kvm2" against &{Name:multinode-866205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-866205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:03:32.916144   50854 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:03:32.916439   50854 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:03:32.916496   50854 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:03:32.930492   50854 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:03:32.931164   50854 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:03:32.931193   50854 cni.go:84] Creating CNI manager for ""
	I0717 18:03:32.931199   50854 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 18:03:32.931251   50854 start.go:340] cluster config:
	{Name:multinode-866205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-866205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:03:32.931359   50854 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:03:32.933111   50854 out.go:177] * Starting "multinode-866205" primary control-plane node in "multinode-866205" cluster
	I0717 18:03:32.934404   50854 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:03:32.934440   50854 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:03:32.934449   50854 cache.go:56] Caching tarball of preloaded images
	I0717 18:03:32.934528   50854 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:03:32.934542   50854 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:03:32.934649   50854 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/config.json ...
	I0717 18:03:32.934829   50854 start.go:360] acquireMachinesLock for multinode-866205: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:03:32.934866   50854 start.go:364] duration metric: took 21.228µs to acquireMachinesLock for "multinode-866205"
	I0717 18:03:32.934884   50854 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:03:32.934891   50854 fix.go:54] fixHost starting: 
	I0717 18:03:32.935129   50854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:03:32.935163   50854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:03:32.948991   50854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0717 18:03:32.949427   50854 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:03:32.949849   50854 main.go:141] libmachine: Using API Version  1
	I0717 18:03:32.949868   50854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:03:32.950216   50854 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:03:32.950395   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:03:32.950558   50854 main.go:141] libmachine: (multinode-866205) Calling .GetState
	I0717 18:03:32.952038   50854 fix.go:112] recreateIfNeeded on multinode-866205: state=Running err=<nil>
	W0717 18:03:32.952053   50854 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:03:32.953768   50854 out.go:177] * Updating the running kvm2 "multinode-866205" VM ...
	I0717 18:03:32.954937   50854 machine.go:94] provisionDockerMachine start ...
	I0717 18:03:32.954953   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:03:32.955127   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:32.957394   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:32.957795   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:32.957817   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:32.957905   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:03:32.958060   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:32.958183   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:32.958312   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:03:32.958519   50854 main.go:141] libmachine: Using SSH client type: native
	I0717 18:03:32.958765   50854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0717 18:03:32.958780   50854 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:03:33.061762   50854 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-866205
	
	I0717 18:03:33.061783   50854 main.go:141] libmachine: (multinode-866205) Calling .GetMachineName
	I0717 18:03:33.062012   50854 buildroot.go:166] provisioning hostname "multinode-866205"
	I0717 18:03:33.062035   50854 main.go:141] libmachine: (multinode-866205) Calling .GetMachineName
	I0717 18:03:33.062245   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:33.064872   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.065203   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.065228   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.065390   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:03:33.065569   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.065737   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.065874   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:03:33.066019   50854 main.go:141] libmachine: Using SSH client type: native
	I0717 18:03:33.066218   50854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0717 18:03:33.066235   50854 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-866205 && echo "multinode-866205" | sudo tee /etc/hostname
	I0717 18:03:33.180298   50854 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-866205
	
	I0717 18:03:33.180344   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:33.183406   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.183869   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.183913   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.184097   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:03:33.184284   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.184436   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.184587   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:03:33.184786   50854 main.go:141] libmachine: Using SSH client type: native
	I0717 18:03:33.184981   50854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0717 18:03:33.185005   50854 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-866205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-866205/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-866205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:03:33.285561   50854 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:03:33.285596   50854 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:03:33.285615   50854 buildroot.go:174] setting up certificates
	I0717 18:03:33.285624   50854 provision.go:84] configureAuth start
	I0717 18:03:33.285633   50854 main.go:141] libmachine: (multinode-866205) Calling .GetMachineName
	I0717 18:03:33.286003   50854 main.go:141] libmachine: (multinode-866205) Calling .GetIP
	I0717 18:03:33.289046   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.289378   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.289398   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.289543   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:33.291689   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.292053   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.292079   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.292199   50854 provision.go:143] copyHostCerts
	I0717 18:03:33.292234   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:03:33.292269   50854 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:03:33.292282   50854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:03:33.292348   50854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:03:33.292440   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:03:33.292457   50854 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:03:33.292464   50854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:03:33.292487   50854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:03:33.292530   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:03:33.292545   50854 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:03:33.292557   50854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:03:33.292587   50854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:03:33.292629   50854 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.multinode-866205 san=[127.0.0.1 192.168.39.16 localhost minikube multinode-866205]
	I0717 18:03:33.425029   50854 provision.go:177] copyRemoteCerts
	I0717 18:03:33.425088   50854 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:03:33.425114   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:33.427637   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.427962   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.427984   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.428170   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:03:33.428365   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.428539   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:03:33.428686   50854 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/multinode-866205/id_rsa Username:docker}
	I0717 18:03:33.506665   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:03:33.506736   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:03:33.529793   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:03:33.529865   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 18:03:33.552889   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:03:33.552969   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:03:33.575857   50854 provision.go:87] duration metric: took 290.220074ms to configureAuth
	I0717 18:03:33.575889   50854 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:03:33.576096   50854 config.go:182] Loaded profile config "multinode-866205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:03:33.576163   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:33.578902   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.579261   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.579304   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.579535   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:03:33.579704   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.579885   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.580134   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:03:33.580344   50854 main.go:141] libmachine: Using SSH client type: native
	I0717 18:03:33.580541   50854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0717 18:03:33.580561   50854 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:05:04.232364   50854 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:05:04.232392   50854 machine.go:97] duration metric: took 1m31.277443119s to provisionDockerMachine
	I0717 18:05:04.232407   50854 start.go:293] postStartSetup for "multinode-866205" (driver="kvm2")
	I0717 18:05:04.232420   50854 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:05:04.232445   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:05:04.232742   50854 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:05:04.232768   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:05:04.235936   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.236331   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:04.236357   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.236476   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:05:04.236685   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:05:04.236854   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:05:04.237093   50854 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/multinode-866205/id_rsa Username:docker}
	I0717 18:05:04.315844   50854 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:05:04.319643   50854 command_runner.go:130] > NAME=Buildroot
	I0717 18:05:04.319665   50854 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 18:05:04.319671   50854 command_runner.go:130] > ID=buildroot
	I0717 18:05:04.319677   50854 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 18:05:04.319690   50854 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 18:05:04.319746   50854 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:05:04.319770   50854 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:05:04.319848   50854 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:05:04.319996   50854 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:05:04.320012   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /etc/ssl/certs/215772.pem
	I0717 18:05:04.320134   50854 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:05:04.329050   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:05:04.350949   50854 start.go:296] duration metric: took 118.52848ms for postStartSetup
	I0717 18:05:04.350995   50854 fix.go:56] duration metric: took 1m31.416102353s for fixHost
	I0717 18:05:04.351020   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:05:04.353635   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.353987   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:04.354023   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.354160   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:05:04.354366   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:05:04.354537   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:05:04.354663   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:05:04.354885   50854 main.go:141] libmachine: Using SSH client type: native
	I0717 18:05:04.355055   50854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0717 18:05:04.355067   50854 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:05:04.453197   50854 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721239504.425295517
	
	I0717 18:05:04.453222   50854 fix.go:216] guest clock: 1721239504.425295517
	I0717 18:05:04.453229   50854 fix.go:229] Guest: 2024-07-17 18:05:04.425295517 +0000 UTC Remote: 2024-07-17 18:05:04.351000553 +0000 UTC m=+91.537763001 (delta=74.294964ms)
	I0717 18:05:04.453246   50854 fix.go:200] guest clock delta is within tolerance: 74.294964ms
	I0717 18:05:04.453251   50854 start.go:83] releasing machines lock for "multinode-866205", held for 1m31.51837647s
	I0717 18:05:04.453268   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:05:04.453510   50854 main.go:141] libmachine: (multinode-866205) Calling .GetIP
	I0717 18:05:04.455802   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.456071   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:04.456102   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.456201   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:05:04.456681   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:05:04.456833   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:05:04.456894   50854 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:05:04.456967   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:05:04.457091   50854 ssh_runner.go:195] Run: cat /version.json
	I0717 18:05:04.457116   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:05:04.459355   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.459639   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.459671   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:04.459692   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.459866   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:05:04.460032   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:05:04.460050   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:04.460069   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.460171   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:05:04.460260   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:05:04.460292   50854 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/multinode-866205/id_rsa Username:docker}
	I0717 18:05:04.460393   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:05:04.460554   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:05:04.460683   50854 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/multinode-866205/id_rsa Username:docker}
	I0717 18:05:04.533733   50854 command_runner.go:130] > {"iso_version": "v1.33.1-1721146474-19264", "kicbase_version": "v0.0.44-1721064868-19249", "minikube_version": "v1.33.1", "commit": "6e0d7ef26437c947028f356d4449a323918e966e"}
	I0717 18:05:04.534102   50854 ssh_runner.go:195] Run: systemctl --version
	I0717 18:05:04.574027   50854 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 18:05:04.574592   50854 command_runner.go:130] > systemd 252 (252)
	I0717 18:05:04.574628   50854 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0717 18:05:04.574687   50854 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:05:04.728095   50854 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 18:05:04.735084   50854 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 18:05:04.735123   50854 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:05:04.735189   50854 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:05:04.743824   50854 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 18:05:04.743849   50854 start.go:495] detecting cgroup driver to use...
	I0717 18:05:04.743921   50854 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:05:04.759060   50854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:05:04.771897   50854 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:05:04.771946   50854 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:05:04.784463   50854 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:05:04.796772   50854 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:05:04.941031   50854 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:05:05.076071   50854 docker.go:233] disabling docker service ...
	I0717 18:05:05.076150   50854 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:05:05.091990   50854 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:05:05.104906   50854 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:05:05.237663   50854 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:05:05.377281   50854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:05:05.390355   50854 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:05:05.407510   50854 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 18:05:05.407560   50854 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:05:05.407610   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.417108   50854 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:05:05.417164   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.426481   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.435752   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.444979   50854 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:05:05.454690   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.464181   50854 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.474673   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.484900   50854 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:05:05.493596   50854 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 18:05:05.493651   50854 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:05:05.501828   50854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:05:05.635967   50854 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:05:11.249442   50854 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.613435488s)
	I0717 18:05:11.249476   50854 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:05:11.249530   50854 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:05:11.254143   50854 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 18:05:11.254168   50854 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 18:05:11.254177   50854 command_runner.go:130] > Device: 0,22	Inode: 1340        Links: 1
	I0717 18:05:11.254186   50854 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 18:05:11.254194   50854 command_runner.go:130] > Access: 2024-07-17 18:05:11.125925713 +0000
	I0717 18:05:11.254205   50854 command_runner.go:130] > Modify: 2024-07-17 18:05:11.125925713 +0000
	I0717 18:05:11.254212   50854 command_runner.go:130] > Change: 2024-07-17 18:05:11.125925713 +0000
	I0717 18:05:11.254219   50854 command_runner.go:130] >  Birth: -
	I0717 18:05:11.254234   50854 start.go:563] Will wait 60s for crictl version
	I0717 18:05:11.254270   50854 ssh_runner.go:195] Run: which crictl
	I0717 18:05:11.257582   50854 command_runner.go:130] > /usr/bin/crictl
	I0717 18:05:11.257718   50854 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:05:11.294443   50854 command_runner.go:130] > Version:  0.1.0
	I0717 18:05:11.294465   50854 command_runner.go:130] > RuntimeName:  cri-o
	I0717 18:05:11.294472   50854 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0717 18:05:11.294480   50854 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 18:05:11.294554   50854 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:05:11.294628   50854 ssh_runner.go:195] Run: crio --version
	I0717 18:05:11.323179   50854 command_runner.go:130] > crio version 1.29.1
	I0717 18:05:11.323196   50854 command_runner.go:130] > Version:        1.29.1
	I0717 18:05:11.323202   50854 command_runner.go:130] > GitCommit:      unknown
	I0717 18:05:11.323206   50854 command_runner.go:130] > GitCommitDate:  unknown
	I0717 18:05:11.323210   50854 command_runner.go:130] > GitTreeState:   clean
	I0717 18:05:11.323228   50854 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 18:05:11.323233   50854 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 18:05:11.323236   50854 command_runner.go:130] > Compiler:       gc
	I0717 18:05:11.323241   50854 command_runner.go:130] > Platform:       linux/amd64
	I0717 18:05:11.323245   50854 command_runner.go:130] > Linkmode:       dynamic
	I0717 18:05:11.323250   50854 command_runner.go:130] > BuildTags:      
	I0717 18:05:11.323254   50854 command_runner.go:130] >   containers_image_ostree_stub
	I0717 18:05:11.323259   50854 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 18:05:11.323266   50854 command_runner.go:130] >   btrfs_noversion
	I0717 18:05:11.323272   50854 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 18:05:11.323279   50854 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 18:05:11.323284   50854 command_runner.go:130] >   seccomp
	I0717 18:05:11.323290   50854 command_runner.go:130] > LDFlags:          unknown
	I0717 18:05:11.323296   50854 command_runner.go:130] > SeccompEnabled:   true
	I0717 18:05:11.323302   50854 command_runner.go:130] > AppArmorEnabled:  false
	I0717 18:05:11.323424   50854 ssh_runner.go:195] Run: crio --version
	I0717 18:05:11.348857   50854 command_runner.go:130] > crio version 1.29.1
	I0717 18:05:11.348878   50854 command_runner.go:130] > Version:        1.29.1
	I0717 18:05:11.348884   50854 command_runner.go:130] > GitCommit:      unknown
	I0717 18:05:11.348889   50854 command_runner.go:130] > GitCommitDate:  unknown
	I0717 18:05:11.348893   50854 command_runner.go:130] > GitTreeState:   clean
	I0717 18:05:11.348898   50854 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 18:05:11.348903   50854 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 18:05:11.348906   50854 command_runner.go:130] > Compiler:       gc
	I0717 18:05:11.348911   50854 command_runner.go:130] > Platform:       linux/amd64
	I0717 18:05:11.348916   50854 command_runner.go:130] > Linkmode:       dynamic
	I0717 18:05:11.348939   50854 command_runner.go:130] > BuildTags:      
	I0717 18:05:11.348962   50854 command_runner.go:130] >   containers_image_ostree_stub
	I0717 18:05:11.348969   50854 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 18:05:11.348976   50854 command_runner.go:130] >   btrfs_noversion
	I0717 18:05:11.348982   50854 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 18:05:11.348987   50854 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 18:05:11.348991   50854 command_runner.go:130] >   seccomp
	I0717 18:05:11.348995   50854 command_runner.go:130] > LDFlags:          unknown
	I0717 18:05:11.349002   50854 command_runner.go:130] > SeccompEnabled:   true
	I0717 18:05:11.349007   50854 command_runner.go:130] > AppArmorEnabled:  false
	I0717 18:05:11.351827   50854 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:05:11.353186   50854 main.go:141] libmachine: (multinode-866205) Calling .GetIP
	I0717 18:05:11.355812   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:11.356127   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:11.356149   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:11.356303   50854 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:05:11.360206   50854 command_runner.go:130] > 192.168.39.1	host.minikube.internal
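The grep above is a readiness check: host.minikube.internal must already resolve to the gateway IP (192.168.39.1) inside the guest before the cluster config is regenerated. A stand-alone sketch of the same check follows; it is illustrative only, not minikube's code, and reads the local /etc/hosts rather than the guest's.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hostsHasEntry reports whether path contains a line mapping ip to hostname,
// the same test the grep against /etc/hosts performs in the log above.
func hostsHasEntry(path, ip, hostname string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == ip && fields[1] == hostname {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hostsHasEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("host.minikube.internal present:", ok)
}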
	I0717 18:05:11.360291   50854 kubeadm.go:883] updating cluster {Name:multinode-866205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.2 ClusterName:multinode-866205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:05:11.360483   50854 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:05:11.360531   50854 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:05:11.404035   50854 command_runner.go:130] > {
	I0717 18:05:11.404050   50854 command_runner.go:130] >   "images": [
	I0717 18:05:11.404054   50854 command_runner.go:130] >     {
	I0717 18:05:11.404061   50854 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 18:05:11.404070   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404076   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 18:05:11.404079   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404083   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404091   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 18:05:11.404100   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 18:05:11.404106   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404111   50854 command_runner.go:130] >       "size": "65908273",
	I0717 18:05:11.404118   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404125   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404133   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404142   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404145   50854 command_runner.go:130] >     },
	I0717 18:05:11.404150   50854 command_runner.go:130] >     {
	I0717 18:05:11.404157   50854 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 18:05:11.404163   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404168   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 18:05:11.404172   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404175   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404183   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 18:05:11.404194   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 18:05:11.404200   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404209   50854 command_runner.go:130] >       "size": "87165492",
	I0717 18:05:11.404214   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404224   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404234   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404241   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404250   50854 command_runner.go:130] >     },
	I0717 18:05:11.404254   50854 command_runner.go:130] >     {
	I0717 18:05:11.404261   50854 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 18:05:11.404265   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404270   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 18:05:11.404274   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404278   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404288   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 18:05:11.404294   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 18:05:11.404299   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404304   50854 command_runner.go:130] >       "size": "1363676",
	I0717 18:05:11.404308   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404312   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404318   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404322   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404325   50854 command_runner.go:130] >     },
	I0717 18:05:11.404328   50854 command_runner.go:130] >     {
	I0717 18:05:11.404334   50854 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 18:05:11.404340   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404345   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 18:05:11.404351   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404354   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404368   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 18:05:11.404381   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 18:05:11.404386   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404390   50854 command_runner.go:130] >       "size": "31470524",
	I0717 18:05:11.404394   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404398   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404405   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404409   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404413   50854 command_runner.go:130] >     },
	I0717 18:05:11.404417   50854 command_runner.go:130] >     {
	I0717 18:05:11.404425   50854 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 18:05:11.404430   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404436   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 18:05:11.404440   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404443   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404450   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 18:05:11.404459   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 18:05:11.404462   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404469   50854 command_runner.go:130] >       "size": "61245718",
	I0717 18:05:11.404472   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404476   50854 command_runner.go:130] >       "username": "nonroot",
	I0717 18:05:11.404481   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404485   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404492   50854 command_runner.go:130] >     },
	I0717 18:05:11.404495   50854 command_runner.go:130] >     {
	I0717 18:05:11.404501   50854 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 18:05:11.404505   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404510   50854 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 18:05:11.404515   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404519   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404526   50854 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 18:05:11.404535   50854 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 18:05:11.404538   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404542   50854 command_runner.go:130] >       "size": "150779692",
	I0717 18:05:11.404547   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.404551   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.404554   50854 command_runner.go:130] >       },
	I0717 18:05:11.404558   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404564   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404567   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404571   50854 command_runner.go:130] >     },
	I0717 18:05:11.404574   50854 command_runner.go:130] >     {
	I0717 18:05:11.404579   50854 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 18:05:11.404585   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404590   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 18:05:11.404593   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404597   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404604   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 18:05:11.404613   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 18:05:11.404618   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404624   50854 command_runner.go:130] >       "size": "117609954",
	I0717 18:05:11.404630   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.404634   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.404639   50854 command_runner.go:130] >       },
	I0717 18:05:11.404643   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404650   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404654   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404660   50854 command_runner.go:130] >     },
	I0717 18:05:11.404663   50854 command_runner.go:130] >     {
	I0717 18:05:11.404671   50854 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 18:05:11.404677   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404682   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 18:05:11.404686   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404691   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404704   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 18:05:11.404714   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 18:05:11.404720   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404724   50854 command_runner.go:130] >       "size": "112194888",
	I0717 18:05:11.404730   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.404734   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.404741   50854 command_runner.go:130] >       },
	I0717 18:05:11.404745   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404748   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404752   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404755   50854 command_runner.go:130] >     },
	I0717 18:05:11.404758   50854 command_runner.go:130] >     {
	I0717 18:05:11.404764   50854 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 18:05:11.404768   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404774   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 18:05:11.404777   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404781   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404790   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 18:05:11.404799   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 18:05:11.404805   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404809   50854 command_runner.go:130] >       "size": "85953433",
	I0717 18:05:11.404814   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404819   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404824   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404829   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404835   50854 command_runner.go:130] >     },
	I0717 18:05:11.404838   50854 command_runner.go:130] >     {
	I0717 18:05:11.404845   50854 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 18:05:11.404850   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404856   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 18:05:11.404862   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404866   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404874   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 18:05:11.404883   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 18:05:11.404888   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404892   50854 command_runner.go:130] >       "size": "63051080",
	I0717 18:05:11.404898   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.404902   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.404908   50854 command_runner.go:130] >       },
	I0717 18:05:11.404912   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404918   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404921   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404927   50854 command_runner.go:130] >     },
	I0717 18:05:11.404930   50854 command_runner.go:130] >     {
	I0717 18:05:11.404938   50854 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 18:05:11.404952   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404957   50854 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 18:05:11.404961   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404964   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404973   50854 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 18:05:11.404981   50854 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 18:05:11.404986   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404990   50854 command_runner.go:130] >       "size": "750414",
	I0717 18:05:11.404996   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.405000   50854 command_runner.go:130] >         "value": "65535"
	I0717 18:05:11.405006   50854 command_runner.go:130] >       },
	I0717 18:05:11.405010   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.405015   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.405019   50854 command_runner.go:130] >       "pinned": true
	I0717 18:05:11.405025   50854 command_runner.go:130] >     }
	I0717 18:05:11.405028   50854 command_runner.go:130] >   ]
	I0717 18:05:11.405034   50854 command_runner.go:130] > }
	I0717 18:05:11.405189   50854 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:05:11.405199   50854 crio.go:433] Images already preloaded, skipping extraction
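The two crio.go lines above are the outcome of the preload check: the JSON printed by "sudo crictl images --output json" is decoded, and because every expected image tag for v1.30.2 is already in the store, extraction of the preloaded tarball is skipped. The sketch below shows the gist of that decision; it is illustrative only, not minikube's code, the JSON struct simply mirrors the field names visible in the dump above, and the expected-tag list is a hand-picked subset for this Kubernetes version.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON shape printed by "crictl images --output json" above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println(err)
		return
	}
	// Collect every tag already present in the runtime's image store.
	have := make(map[string]bool)
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// A preload check boils down to: are all expected tags already there?
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.30.2",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	}
	for _, want := range expected {
		fmt.Printf("%-45s preloaded=%v\n", want, have[want])
	}
}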
	I0717 18:05:11.405246   50854 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:05:11.435589   50854 command_runner.go:130] > {
	I0717 18:05:11.435610   50854 command_runner.go:130] >   "images": [
	I0717 18:05:11.435614   50854 command_runner.go:130] >     {
	I0717 18:05:11.435626   50854 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 18:05:11.435630   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.435636   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 18:05:11.435639   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435643   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.435672   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 18:05:11.435684   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 18:05:11.435687   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435692   50854 command_runner.go:130] >       "size": "65908273",
	I0717 18:05:11.435696   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.435700   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.435708   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.435714   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.435718   50854 command_runner.go:130] >     },
	I0717 18:05:11.435721   50854 command_runner.go:130] >     {
	I0717 18:05:11.435727   50854 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 18:05:11.435731   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.435736   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 18:05:11.435742   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435746   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.435752   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 18:05:11.435759   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 18:05:11.435763   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435767   50854 command_runner.go:130] >       "size": "87165492",
	I0717 18:05:11.435771   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.435777   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.435781   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.435785   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.435788   50854 command_runner.go:130] >     },
	I0717 18:05:11.435792   50854 command_runner.go:130] >     {
	I0717 18:05:11.435798   50854 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 18:05:11.435804   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.435809   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 18:05:11.435812   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435816   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.435824   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 18:05:11.435834   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 18:05:11.435837   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435841   50854 command_runner.go:130] >       "size": "1363676",
	I0717 18:05:11.435845   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.435849   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.435855   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.435859   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.435865   50854 command_runner.go:130] >     },
	I0717 18:05:11.435868   50854 command_runner.go:130] >     {
	I0717 18:05:11.435876   50854 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 18:05:11.435881   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.435887   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 18:05:11.435893   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435897   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.435904   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 18:05:11.435916   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 18:05:11.435921   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435925   50854 command_runner.go:130] >       "size": "31470524",
	I0717 18:05:11.435929   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.435933   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.435937   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.435941   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.435946   50854 command_runner.go:130] >     },
	I0717 18:05:11.435949   50854 command_runner.go:130] >     {
	I0717 18:05:11.435955   50854 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 18:05:11.435961   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.435966   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 18:05:11.435970   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435973   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.435982   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 18:05:11.435990   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 18:05:11.435995   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436000   50854 command_runner.go:130] >       "size": "61245718",
	I0717 18:05:11.436005   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.436009   50854 command_runner.go:130] >       "username": "nonroot",
	I0717 18:05:11.436015   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436019   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436022   50854 command_runner.go:130] >     },
	I0717 18:05:11.436025   50854 command_runner.go:130] >     {
	I0717 18:05:11.436031   50854 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 18:05:11.436037   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436042   50854 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 18:05:11.436047   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436051   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436060   50854 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 18:05:11.436067   50854 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 18:05:11.436072   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436077   50854 command_runner.go:130] >       "size": "150779692",
	I0717 18:05:11.436080   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.436084   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.436088   50854 command_runner.go:130] >       },
	I0717 18:05:11.436092   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436096   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436102   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436105   50854 command_runner.go:130] >     },
	I0717 18:05:11.436110   50854 command_runner.go:130] >     {
	I0717 18:05:11.436118   50854 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 18:05:11.436122   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436127   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 18:05:11.436133   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436137   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436144   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 18:05:11.436153   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 18:05:11.436156   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436160   50854 command_runner.go:130] >       "size": "117609954",
	I0717 18:05:11.436167   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.436171   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.436175   50854 command_runner.go:130] >       },
	I0717 18:05:11.436179   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436185   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436189   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436194   50854 command_runner.go:130] >     },
	I0717 18:05:11.436197   50854 command_runner.go:130] >     {
	I0717 18:05:11.436203   50854 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 18:05:11.436209   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436214   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 18:05:11.436220   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436223   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436236   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 18:05:11.436246   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 18:05:11.436249   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436253   50854 command_runner.go:130] >       "size": "112194888",
	I0717 18:05:11.436257   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.436261   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.436264   50854 command_runner.go:130] >       },
	I0717 18:05:11.436268   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436272   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436275   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436279   50854 command_runner.go:130] >     },
	I0717 18:05:11.436282   50854 command_runner.go:130] >     {
	I0717 18:05:11.436288   50854 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 18:05:11.436294   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436299   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 18:05:11.436304   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436308   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436317   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 18:05:11.436324   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 18:05:11.436329   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436333   50854 command_runner.go:130] >       "size": "85953433",
	I0717 18:05:11.436338   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.436342   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436348   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436352   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436355   50854 command_runner.go:130] >     },
	I0717 18:05:11.436359   50854 command_runner.go:130] >     {
	I0717 18:05:11.436371   50854 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 18:05:11.436377   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436381   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 18:05:11.436387   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436391   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436400   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 18:05:11.436407   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 18:05:11.436412   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436416   50854 command_runner.go:130] >       "size": "63051080",
	I0717 18:05:11.436420   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.436423   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.436427   50854 command_runner.go:130] >       },
	I0717 18:05:11.436430   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436434   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436438   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436442   50854 command_runner.go:130] >     },
	I0717 18:05:11.436447   50854 command_runner.go:130] >     {
	I0717 18:05:11.436453   50854 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 18:05:11.436458   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436463   50854 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 18:05:11.436469   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436473   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436479   50854 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 18:05:11.436488   50854 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 18:05:11.436491   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436495   50854 command_runner.go:130] >       "size": "750414",
	I0717 18:05:11.436498   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.436502   50854 command_runner.go:130] >         "value": "65535"
	I0717 18:05:11.436505   50854 command_runner.go:130] >       },
	I0717 18:05:11.436509   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436515   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436518   50854 command_runner.go:130] >       "pinned": true
	I0717 18:05:11.436521   50854 command_runner.go:130] >     }
	I0717 18:05:11.436524   50854 command_runner.go:130] >   ]
	I0717 18:05:11.436530   50854 command_runner.go:130] > }
	I0717 18:05:11.437063   50854 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:05:11.437076   50854 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:05:11.437086   50854 kubeadm.go:934] updating node { 192.168.39.16 8443 v1.30.2 crio true true} ...
	I0717 18:05:11.437177   50854 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-866205 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-866205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
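The kubeadm.go:946 block above is the kubelet systemd drop-in minikube writes for this node, with the node name and IP substituted per machine. A small sketch of how such a drop-in can be rendered with text/template follows; it is illustrative only, not the real minikube template, and the flag set and per-node values are copied from the log output above.

package main

import (
	"os"
	"text/template"
)

// kubeletUnit reproduces the shape of the drop-in shown in the log above;
// only the per-node values are templated.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.2", "multinode-866205", "192.168.39.16"})
}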
	I0717 18:05:11.437243   50854 ssh_runner.go:195] Run: crio config
	I0717 18:05:11.469210   50854 command_runner.go:130] ! time="2024-07-17 18:05:11.440928153Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0717 18:05:11.475188   50854 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 18:05:11.481102   50854 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 18:05:11.481120   50854 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 18:05:11.481126   50854 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 18:05:11.481129   50854 command_runner.go:130] > #
	I0717 18:05:11.481150   50854 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 18:05:11.481163   50854 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 18:05:11.481172   50854 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 18:05:11.481183   50854 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 18:05:11.481189   50854 command_runner.go:130] > # reload'.
	I0717 18:05:11.481201   50854 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 18:05:11.481213   50854 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 18:05:11.481225   50854 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 18:05:11.481236   50854 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 18:05:11.481242   50854 command_runner.go:130] > [crio]
	I0717 18:05:11.481252   50854 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 18:05:11.481263   50854 command_runner.go:130] > # containers images, in this directory.
	I0717 18:05:11.481268   50854 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 18:05:11.481275   50854 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 18:05:11.481283   50854 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 18:05:11.481294   50854 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0717 18:05:11.481303   50854 command_runner.go:130] > # imagestore = ""
	I0717 18:05:11.481320   50854 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 18:05:11.481333   50854 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 18:05:11.481342   50854 command_runner.go:130] > storage_driver = "overlay"
	I0717 18:05:11.481353   50854 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 18:05:11.481365   50854 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 18:05:11.481376   50854 command_runner.go:130] > storage_option = [
	I0717 18:05:11.481387   50854 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 18:05:11.481395   50854 command_runner.go:130] > ]
	I0717 18:05:11.481406   50854 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 18:05:11.481419   50854 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 18:05:11.481428   50854 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 18:05:11.481435   50854 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 18:05:11.481444   50854 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 18:05:11.481450   50854 command_runner.go:130] > # always happen on a node reboot
	I0717 18:05:11.481455   50854 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 18:05:11.481466   50854 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 18:05:11.481474   50854 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 18:05:11.481480   50854 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 18:05:11.481487   50854 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0717 18:05:11.481494   50854 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 18:05:11.481504   50854 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 18:05:11.481510   50854 command_runner.go:130] > # internal_wipe = true
	I0717 18:05:11.481517   50854 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0717 18:05:11.481524   50854 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0717 18:05:11.481528   50854 command_runner.go:130] > # internal_repair = false
	I0717 18:05:11.481536   50854 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 18:05:11.481543   50854 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 18:05:11.481548   50854 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 18:05:11.481555   50854 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 18:05:11.481561   50854 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 18:05:11.481569   50854 command_runner.go:130] > [crio.api]
	I0717 18:05:11.481576   50854 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 18:05:11.481583   50854 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 18:05:11.481588   50854 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 18:05:11.481595   50854 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 18:05:11.481601   50854 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 18:05:11.481608   50854 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 18:05:11.481612   50854 command_runner.go:130] > # stream_port = "0"
	I0717 18:05:11.481620   50854 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 18:05:11.481626   50854 command_runner.go:130] > # stream_enable_tls = false
	I0717 18:05:11.481632   50854 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 18:05:11.481638   50854 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 18:05:11.481645   50854 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 18:05:11.481653   50854 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 18:05:11.481657   50854 command_runner.go:130] > # minutes.
	I0717 18:05:11.481661   50854 command_runner.go:130] > # stream_tls_cert = ""
	I0717 18:05:11.481669   50854 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 18:05:11.481675   50854 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 18:05:11.481682   50854 command_runner.go:130] > # stream_tls_key = ""
	I0717 18:05:11.481687   50854 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 18:05:11.481695   50854 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 18:05:11.481708   50854 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 18:05:11.481714   50854 command_runner.go:130] > # stream_tls_ca = ""
	I0717 18:05:11.481721   50854 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 18:05:11.481727   50854 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 18:05:11.481735   50854 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 18:05:11.481741   50854 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 18:05:11.481747   50854 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 18:05:11.481755   50854 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 18:05:11.481759   50854 command_runner.go:130] > [crio.runtime]
	I0717 18:05:11.481765   50854 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 18:05:11.481772   50854 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 18:05:11.481779   50854 command_runner.go:130] > # "nofile=1024:2048"
	I0717 18:05:11.481785   50854 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 18:05:11.481791   50854 command_runner.go:130] > # default_ulimits = [
	I0717 18:05:11.481794   50854 command_runner.go:130] > # ]
	I0717 18:05:11.481803   50854 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 18:05:11.481808   50854 command_runner.go:130] > # no_pivot = false
	I0717 18:05:11.481814   50854 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 18:05:11.481821   50854 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 18:05:11.481828   50854 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 18:05:11.481833   50854 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 18:05:11.481840   50854 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 18:05:11.481846   50854 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 18:05:11.481853   50854 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 18:05:11.481857   50854 command_runner.go:130] > # Cgroup setting for conmon
	I0717 18:05:11.481865   50854 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 18:05:11.481876   50854 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 18:05:11.481885   50854 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 18:05:11.481892   50854 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 18:05:11.481898   50854 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 18:05:11.481904   50854 command_runner.go:130] > conmon_env = [
	I0717 18:05:11.481909   50854 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 18:05:11.481914   50854 command_runner.go:130] > ]
	I0717 18:05:11.481919   50854 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 18:05:11.481926   50854 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 18:05:11.481932   50854 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 18:05:11.481937   50854 command_runner.go:130] > # default_env = [
	I0717 18:05:11.481941   50854 command_runner.go:130] > # ]
	I0717 18:05:11.481948   50854 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 18:05:11.481957   50854 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0717 18:05:11.481964   50854 command_runner.go:130] > # selinux = false
	I0717 18:05:11.481970   50854 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 18:05:11.481978   50854 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 18:05:11.481985   50854 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 18:05:11.481989   50854 command_runner.go:130] > # seccomp_profile = ""
	I0717 18:05:11.481996   50854 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 18:05:11.482002   50854 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 18:05:11.482009   50854 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 18:05:11.482014   50854 command_runner.go:130] > # which might increase security.
	I0717 18:05:11.482019   50854 command_runner.go:130] > # This option is currently deprecated,
	I0717 18:05:11.482026   50854 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0717 18:05:11.482034   50854 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 18:05:11.482040   50854 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 18:05:11.482048   50854 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 18:05:11.482056   50854 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 18:05:11.482062   50854 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 18:05:11.482068   50854 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:05:11.482072   50854 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 18:05:11.482078   50854 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 18:05:11.482084   50854 command_runner.go:130] > # the cgroup blockio controller.
	I0717 18:05:11.482088   50854 command_runner.go:130] > # blockio_config_file = ""
	I0717 18:05:11.482096   50854 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0717 18:05:11.482102   50854 command_runner.go:130] > # blockio parameters.
	I0717 18:05:11.482106   50854 command_runner.go:130] > # blockio_reload = false
	I0717 18:05:11.482114   50854 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 18:05:11.482120   50854 command_runner.go:130] > # irqbalance daemon.
	I0717 18:05:11.482125   50854 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 18:05:11.482132   50854 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0717 18:05:11.482140   50854 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0717 18:05:11.482149   50854 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0717 18:05:11.482157   50854 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0717 18:05:11.482165   50854 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 18:05:11.482170   50854 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:05:11.482175   50854 command_runner.go:130] > # rdt_config_file = ""
	I0717 18:05:11.482180   50854 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 18:05:11.482186   50854 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 18:05:11.482202   50854 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 18:05:11.482209   50854 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 18:05:11.482215   50854 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 18:05:11.482220   50854 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 18:05:11.482226   50854 command_runner.go:130] > # will be added.
	I0717 18:05:11.482230   50854 command_runner.go:130] > # default_capabilities = [
	I0717 18:05:11.482235   50854 command_runner.go:130] > # 	"CHOWN",
	I0717 18:05:11.482239   50854 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 18:05:11.482245   50854 command_runner.go:130] > # 	"FSETID",
	I0717 18:05:11.482249   50854 command_runner.go:130] > # 	"FOWNER",
	I0717 18:05:11.482255   50854 command_runner.go:130] > # 	"SETGID",
	I0717 18:05:11.482259   50854 command_runner.go:130] > # 	"SETUID",
	I0717 18:05:11.482264   50854 command_runner.go:130] > # 	"SETPCAP",
	I0717 18:05:11.482268   50854 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 18:05:11.482275   50854 command_runner.go:130] > # 	"KILL",
	I0717 18:05:11.482278   50854 command_runner.go:130] > # ]
	I0717 18:05:11.482291   50854 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 18:05:11.482311   50854 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 18:05:11.482325   50854 command_runner.go:130] > # add_inheritable_capabilities = false
	I0717 18:05:11.482337   50854 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 18:05:11.482349   50854 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 18:05:11.482358   50854 command_runner.go:130] > default_sysctls = [
	I0717 18:05:11.482367   50854 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0717 18:05:11.482371   50854 command_runner.go:130] > ]
	I0717 18:05:11.482378   50854 command_runner.go:130] > # List of devices on the host that a
	I0717 18:05:11.482384   50854 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 18:05:11.482390   50854 command_runner.go:130] > # allowed_devices = [
	I0717 18:05:11.482394   50854 command_runner.go:130] > # 	"/dev/fuse",
	I0717 18:05:11.482399   50854 command_runner.go:130] > # ]
	I0717 18:05:11.482404   50854 command_runner.go:130] > # List of additional devices, specified as
	I0717 18:05:11.482413   50854 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 18:05:11.482421   50854 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 18:05:11.482428   50854 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 18:05:11.482433   50854 command_runner.go:130] > # additional_devices = [
	I0717 18:05:11.482438   50854 command_runner.go:130] > # ]
	I0717 18:05:11.482442   50854 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 18:05:11.482448   50854 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 18:05:11.482452   50854 command_runner.go:130] > # 	"/etc/cdi",
	I0717 18:05:11.482458   50854 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 18:05:11.482461   50854 command_runner.go:130] > # ]
	I0717 18:05:11.482467   50854 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 18:05:11.482475   50854 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 18:05:11.482480   50854 command_runner.go:130] > # Defaults to false.
	I0717 18:05:11.482484   50854 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 18:05:11.482493   50854 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 18:05:11.482501   50854 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 18:05:11.482505   50854 command_runner.go:130] > # hooks_dir = [
	I0717 18:05:11.482513   50854 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 18:05:11.482519   50854 command_runner.go:130] > # ]
	I0717 18:05:11.482525   50854 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 18:05:11.482533   50854 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 18:05:11.482540   50854 command_runner.go:130] > # its default mounts from the following two files:
	I0717 18:05:11.482543   50854 command_runner.go:130] > #
	I0717 18:05:11.482552   50854 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 18:05:11.482560   50854 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 18:05:11.482569   50854 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 18:05:11.482575   50854 command_runner.go:130] > #
	I0717 18:05:11.482581   50854 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 18:05:11.482589   50854 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 18:05:11.482597   50854 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 18:05:11.482604   50854 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 18:05:11.482608   50854 command_runner.go:130] > #
	I0717 18:05:11.482614   50854 command_runner.go:130] > # default_mounts_file = ""
	I0717 18:05:11.482620   50854 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 18:05:11.482628   50854 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 18:05:11.482634   50854 command_runner.go:130] > pids_limit = 1024
	I0717 18:05:11.482640   50854 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 18:05:11.482647   50854 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 18:05:11.482656   50854 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 18:05:11.482665   50854 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 18:05:11.482672   50854 command_runner.go:130] > # log_size_max = -1
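Both options above are deprecated in favour of kubelet settings. A minimal sketch of the kubelet-side equivalents as KubeletConfiguration fields, written to a scratch file; the file name and the values are illustrative, not taken from this cluster:

cat <<'EOF' > kubelet-limits-fragment.yaml
# Kubelet replacements for the two deprecated CRI-O options above
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 1024            # replaces pids_limit (--pod-pids-limit)
containerLogMaxSize: "10Mi"   # replaces log_size_max (--container-log-max-size)
EOF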
	I0717 18:05:11.482678   50854 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 18:05:11.482685   50854 command_runner.go:130] > # log_to_journald = false
	I0717 18:05:11.482691   50854 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 18:05:11.482697   50854 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 18:05:11.482702   50854 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 18:05:11.482709   50854 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 18:05:11.482714   50854 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 18:05:11.482720   50854 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 18:05:11.482725   50854 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 18:05:11.482731   50854 command_runner.go:130] > # read_only = false
	I0717 18:05:11.482737   50854 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 18:05:11.482744   50854 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 18:05:11.482751   50854 command_runner.go:130] > # live configuration reload.
	I0717 18:05:11.482755   50854 command_runner.go:130] > # log_level = "info"
	I0717 18:05:11.482763   50854 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 18:05:11.482769   50854 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:05:11.482775   50854 command_runner.go:130] > # log_filter = ""
	I0717 18:05:11.482781   50854 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 18:05:11.482790   50854 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 18:05:11.482794   50854 command_runner.go:130] > # separated by comma.
	I0717 18:05:11.482803   50854 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:05:11.482809   50854 command_runner.go:130] > # uid_mappings = ""
	I0717 18:05:11.482815   50854 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 18:05:11.482822   50854 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 18:05:11.482829   50854 command_runner.go:130] > # separated by comma.
	I0717 18:05:11.482837   50854 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:05:11.482843   50854 command_runner.go:130] > # gid_mappings = ""
	I0717 18:05:11.482849   50854 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 18:05:11.482857   50854 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 18:05:11.482865   50854 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 18:05:11.482873   50854 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:05:11.482879   50854 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 18:05:11.482884   50854 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 18:05:11.482892   50854 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 18:05:11.482900   50854 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 18:05:11.482910   50854 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:05:11.482916   50854 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 18:05:11.482921   50854 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 18:05:11.482929   50854 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 18:05:11.482937   50854 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 18:05:11.482942   50854 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 18:05:11.482948   50854 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 18:05:11.482955   50854 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 18:05:11.482962   50854 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 18:05:11.482971   50854 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 18:05:11.482977   50854 command_runner.go:130] > drop_infra_ctr = false
	I0717 18:05:11.482983   50854 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 18:05:11.482990   50854 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 18:05:11.482997   50854 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 18:05:11.483003   50854 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 18:05:11.483010   50854 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0717 18:05:11.483022   50854 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0717 18:05:11.483030   50854 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0717 18:05:11.483037   50854 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0717 18:05:11.483042   50854 command_runner.go:130] > # shared_cpuset = ""
	I0717 18:05:11.483047   50854 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 18:05:11.483052   50854 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 18:05:11.483058   50854 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 18:05:11.483064   50854 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 18:05:11.483070   50854 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 18:05:11.483075   50854 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0717 18:05:11.483083   50854 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0717 18:05:11.483090   50854 command_runner.go:130] > # enable_criu_support = false
	I0717 18:05:11.483095   50854 command_runner.go:130] > # Enable/disable the generation of the container,
	I0717 18:05:11.483102   50854 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0717 18:05:11.483108   50854 command_runner.go:130] > # enable_pod_events = false
	I0717 18:05:11.483115   50854 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 18:05:11.483130   50854 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0717 18:05:11.483133   50854 command_runner.go:130] > # default_runtime = "runc"
	I0717 18:05:11.483141   50854 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 18:05:11.483147   50854 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 18:05:11.483157   50854 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 18:05:11.483164   50854 command_runner.go:130] > # creation as a file is not desired either.
	I0717 18:05:11.483172   50854 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 18:05:11.483179   50854 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 18:05:11.483185   50854 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 18:05:11.483192   50854 command_runner.go:130] > # ]
	I0717 18:05:11.483202   50854 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 18:05:11.483214   50854 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 18:05:11.483226   50854 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0717 18:05:11.483236   50854 command_runner.go:130] > # Each entry in the table should follow the format:
	I0717 18:05:11.483243   50854 command_runner.go:130] > #
	I0717 18:05:11.483250   50854 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0717 18:05:11.483261   50854 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0717 18:05:11.483286   50854 command_runner.go:130] > # runtime_type = "oci"
	I0717 18:05:11.483296   50854 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0717 18:05:11.483304   50854 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0717 18:05:11.483314   50854 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0717 18:05:11.483327   50854 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0717 18:05:11.483336   50854 command_runner.go:130] > # monitor_env = []
	I0717 18:05:11.483346   50854 command_runner.go:130] > # privileged_without_host_devices = false
	I0717 18:05:11.483356   50854 command_runner.go:130] > # allowed_annotations = []
	I0717 18:05:11.483366   50854 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0717 18:05:11.483372   50854 command_runner.go:130] > # Where:
	I0717 18:05:11.483378   50854 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0717 18:05:11.483386   50854 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0717 18:05:11.483392   50854 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 18:05:11.483400   50854 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 18:05:11.483404   50854 command_runner.go:130] > #   in $PATH.
	I0717 18:05:11.483411   50854 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0717 18:05:11.483418   50854 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 18:05:11.483424   50854 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0717 18:05:11.483430   50854 command_runner.go:130] > #   state.
	I0717 18:05:11.483436   50854 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 18:05:11.483444   50854 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0717 18:05:11.483453   50854 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 18:05:11.483460   50854 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 18:05:11.483468   50854 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 18:05:11.483474   50854 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 18:05:11.483480   50854 command_runner.go:130] > #   The currently recognized values are:
	I0717 18:05:11.483486   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 18:05:11.483496   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 18:05:11.483504   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 18:05:11.483510   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 18:05:11.483519   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 18:05:11.483527   50854 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 18:05:11.483534   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0717 18:05:11.483541   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0717 18:05:11.483549   50854 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 18:05:11.483555   50854 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0717 18:05:11.483560   50854 command_runner.go:130] > #   deprecated option "conmon".
	I0717 18:05:11.483569   50854 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0717 18:05:11.483576   50854 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0717 18:05:11.483583   50854 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0717 18:05:11.483589   50854 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 18:05:11.483595   50854 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0717 18:05:11.483603   50854 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0717 18:05:11.483609   50854 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0717 18:05:11.483616   50854 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0717 18:05:11.483619   50854 command_runner.go:130] > #
	I0717 18:05:11.483626   50854 command_runner.go:130] > # Using the seccomp notifier feature:
	I0717 18:05:11.483629   50854 command_runner.go:130] > #
	I0717 18:05:11.483635   50854 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0717 18:05:11.483643   50854 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0717 18:05:11.483646   50854 command_runner.go:130] > #
	I0717 18:05:11.483654   50854 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0717 18:05:11.483660   50854 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0717 18:05:11.483665   50854 command_runner.go:130] > #
	I0717 18:05:11.483671   50854 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0717 18:05:11.483676   50854 command_runner.go:130] > # feature.
	I0717 18:05:11.483679   50854 command_runner.go:130] > #
	I0717 18:05:11.483686   50854 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0717 18:05:11.483694   50854 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0717 18:05:11.483702   50854 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0717 18:05:11.483710   50854 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0717 18:05:11.483718   50854 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0717 18:05:11.483721   50854 command_runner.go:130] > #
	I0717 18:05:11.483729   50854 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0717 18:05:11.483734   50854 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0717 18:05:11.483740   50854 command_runner.go:130] > #
	I0717 18:05:11.483745   50854 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0717 18:05:11.483752   50854 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0717 18:05:11.483755   50854 command_runner.go:130] > #
	I0717 18:05:11.483763   50854 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0717 18:05:11.483769   50854 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0717 18:05:11.483775   50854 command_runner.go:130] > # limitation.
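To exercise the notifier flow described above, the annotation goes on the pod metadata and restartPolicy must be Never. A minimal sketch, assuming a runtime handler that already lists "io.kubernetes.cri-o.seccompNotifierAction" in allowed_annotations and a Localhost seccomp profile that blocks the syscall of interest; the pod name, image and profile path are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-notifier-demo                        # hypothetical name
  annotations:
    io.kubernetes.cri-o.seccompNotifierAction: "stop"
spec:
  restartPolicy: Never                               # required, see note above
  containers:
  - name: demo
    image: busybox                                   # illustrative image
    command: ["sleep", "3600"]
    securityContext:
      seccompProfile:
        type: Localhost
        localhostProfile: profiles/audit-block.json  # hypothetical profile
EOF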
	I0717 18:05:11.483780   50854 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 18:05:11.483787   50854 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 18:05:11.483791   50854 command_runner.go:130] > runtime_type = "oci"
	I0717 18:05:11.483795   50854 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 18:05:11.483799   50854 command_runner.go:130] > runtime_config_path = ""
	I0717 18:05:11.483806   50854 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0717 18:05:11.483810   50854 command_runner.go:130] > monitor_cgroup = "pod"
	I0717 18:05:11.483814   50854 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 18:05:11.483818   50854 command_runner.go:130] > monitor_env = [
	I0717 18:05:11.483824   50854 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 18:05:11.483830   50854 command_runner.go:130] > ]
	I0717 18:05:11.483834   50854 command_runner.go:130] > privileged_without_host_devices = false
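Following the field descriptions above, an additional handler could be registered next to the runc entry through a CRI-O drop-in file; the handler name, binary path and root directory below are assumptions, not part of this configuration:

# Sketch only: declare an extra OCI runtime handler via a drop-in and reload CRI-O
sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
EOF
sudo systemctl restart crio

Pods would then select it through a RuntimeClass whose handler field matches the table name ("crun" here).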
	I0717 18:05:11.483847   50854 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 18:05:11.483858   50854 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 18:05:11.483871   50854 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 18:05:11.483883   50854 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 18:05:11.483897   50854 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 18:05:11.483907   50854 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 18:05:11.483915   50854 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 18:05:11.483924   50854 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 18:05:11.483930   50854 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 18:05:11.483937   50854 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 18:05:11.483940   50854 command_runner.go:130] > # Example:
	I0717 18:05:11.483944   50854 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 18:05:11.483948   50854 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 18:05:11.483953   50854 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 18:05:11.483957   50854 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 18:05:11.483960   50854 command_runner.go:130] > # cpuset = 0
	I0717 18:05:11.483964   50854 command_runner.go:130] > # cpushares = "0-1"
	I0717 18:05:11.483967   50854 command_runner.go:130] > # Where:
	I0717 18:05:11.483972   50854 command_runner.go:130] > # The workload name is workload-type.
	I0717 18:05:11.483978   50854 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 18:05:11.483983   50854 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 18:05:11.483987   50854 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 18:05:11.483995   50854 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 18:05:11.484008   50854 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
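Putting the pieces above together, a pod opts into the example workload with the key-only activation annotation and can override a resource per container via the prefixed annotation; the pod/container names and the cpushares value below are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo                                   # hypothetical name
  annotations:
    io.crio/workload: ""                                # activation (value ignored)
    io.crio.workload-type/app: '{"cpushares": "512"}'   # per-container override
spec:
  containers:
  - name: app
    image: busybox                                      # illustrative image
    command: ["sleep", "3600"]
EOF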
	I0717 18:05:11.484013   50854 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0717 18:05:11.484021   50854 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0717 18:05:11.484028   50854 command_runner.go:130] > # Default value is set to true
	I0717 18:05:11.484031   50854 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0717 18:05:11.484039   50854 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0717 18:05:11.484045   50854 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0717 18:05:11.484050   50854 command_runner.go:130] > # Default value is set to 'false'
	I0717 18:05:11.484056   50854 command_runner.go:130] > # disable_hostport_mapping = false
	I0717 18:05:11.484062   50854 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 18:05:11.484067   50854 command_runner.go:130] > #
	I0717 18:05:11.484073   50854 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 18:05:11.484079   50854 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 18:05:11.484088   50854 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 18:05:11.484096   50854 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 18:05:11.484104   50854 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 18:05:11.484109   50854 command_runner.go:130] > [crio.image]
	I0717 18:05:11.484115   50854 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 18:05:11.484121   50854 command_runner.go:130] > # default_transport = "docker://"
	I0717 18:05:11.484127   50854 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 18:05:11.484135   50854 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 18:05:11.484139   50854 command_runner.go:130] > # global_auth_file = ""
	I0717 18:05:11.484144   50854 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 18:05:11.484151   50854 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:05:11.484155   50854 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0717 18:05:11.484163   50854 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 18:05:11.484170   50854 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 18:05:11.484176   50854 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:05:11.484181   50854 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 18:05:11.484188   50854 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 18:05:11.484193   50854 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 18:05:11.484201   50854 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 18:05:11.484207   50854 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 18:05:11.484213   50854 command_runner.go:130] > # pause_command = "/pause"
	I0717 18:05:11.484219   50854 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0717 18:05:11.484226   50854 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0717 18:05:11.484235   50854 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0717 18:05:11.484245   50854 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0717 18:05:11.484253   50854 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0717 18:05:11.484261   50854 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0717 18:05:11.484267   50854 command_runner.go:130] > # pinned_images = [
	I0717 18:05:11.484270   50854 command_runner.go:130] > # ]
	I0717 18:05:11.484278   50854 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 18:05:11.484289   50854 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 18:05:11.484301   50854 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 18:05:11.484313   50854 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 18:05:11.484328   50854 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 18:05:11.484337   50854 command_runner.go:130] > # signature_policy = ""
	I0717 18:05:11.484347   50854 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0717 18:05:11.484359   50854 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0717 18:05:11.484370   50854 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0717 18:05:11.484381   50854 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0717 18:05:11.484391   50854 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0717 18:05:11.484400   50854 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0717 18:05:11.484411   50854 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 18:05:11.484422   50854 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 18:05:11.484430   50854 command_runner.go:130] > # changing them here.
	I0717 18:05:11.484439   50854 command_runner.go:130] > # insecure_registries = [
	I0717 18:05:11.484446   50854 command_runner.go:130] > # ]
	I0717 18:05:11.484455   50854 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 18:05:11.484465   50854 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 18:05:11.484474   50854 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 18:05:11.484485   50854 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 18:05:11.484495   50854 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 18:05:11.484507   50854 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 18:05:11.484513   50854 command_runner.go:130] > # CNI plugins.
	I0717 18:05:11.484517   50854 command_runner.go:130] > [crio.network]
	I0717 18:05:11.484523   50854 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 18:05:11.484530   50854 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 18:05:11.484537   50854 command_runner.go:130] > # cni_default_network = ""
	I0717 18:05:11.484543   50854 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 18:05:11.484549   50854 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 18:05:11.484556   50854 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 18:05:11.484562   50854 command_runner.go:130] > # plugin_dirs = [
	I0717 18:05:11.484567   50854 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 18:05:11.484573   50854 command_runner.go:130] > # ]
	I0717 18:05:11.484582   50854 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 18:05:11.484591   50854 command_runner.go:130] > [crio.metrics]
	I0717 18:05:11.484601   50854 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 18:05:11.484610   50854 command_runner.go:130] > enable_metrics = true
	I0717 18:05:11.484619   50854 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 18:05:11.484630   50854 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 18:05:11.484642   50854 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0717 18:05:11.484655   50854 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 18:05:11.484664   50854 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 18:05:11.484670   50854 command_runner.go:130] > # metrics_collectors = [
	I0717 18:05:11.484674   50854 command_runner.go:130] > # 	"operations",
	I0717 18:05:11.484682   50854 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 18:05:11.484688   50854 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 18:05:11.484693   50854 command_runner.go:130] > # 	"operations_errors",
	I0717 18:05:11.484699   50854 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 18:05:11.484703   50854 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 18:05:11.484710   50854 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 18:05:11.484713   50854 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 18:05:11.484719   50854 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 18:05:11.484724   50854 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 18:05:11.484730   50854 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 18:05:11.484734   50854 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0717 18:05:11.484740   50854 command_runner.go:130] > # 	"containers_oom_total",
	I0717 18:05:11.484744   50854 command_runner.go:130] > # 	"containers_oom",
	I0717 18:05:11.484749   50854 command_runner.go:130] > # 	"processes_defunct",
	I0717 18:05:11.484753   50854 command_runner.go:130] > # 	"operations_total",
	I0717 18:05:11.484760   50854 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 18:05:11.484764   50854 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 18:05:11.484770   50854 command_runner.go:130] > # 	"operations_errors_total",
	I0717 18:05:11.484774   50854 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 18:05:11.484781   50854 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 18:05:11.484786   50854 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 18:05:11.484793   50854 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 18:05:11.484797   50854 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 18:05:11.484803   50854 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 18:05:11.484807   50854 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0717 18:05:11.484813   50854 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0717 18:05:11.484817   50854 command_runner.go:130] > # ]
	I0717 18:05:11.484824   50854 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 18:05:11.484829   50854 command_runner.go:130] > # metrics_port = 9090
	I0717 18:05:11.484833   50854 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 18:05:11.484839   50854 command_runner.go:130] > # metrics_socket = ""
	I0717 18:05:11.484844   50854 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 18:05:11.484851   50854 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 18:05:11.484860   50854 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 18:05:11.484865   50854 command_runner.go:130] > # certificate on any modification event.
	I0717 18:05:11.484871   50854 command_runner.go:130] > # metrics_cert = ""
	I0717 18:05:11.484876   50854 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 18:05:11.484882   50854 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 18:05:11.484886   50854 command_runner.go:130] > # metrics_key = ""
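Since enable_metrics is true and the defaults above leave the port at 9090 with no TLS material, the Prometheus endpoint can be probed from the node itself (for example via "minikube ssh" on this profile). The endpoint's availability and the metric name grepped for are assumptions:

curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations' | head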
	I0717 18:05:11.484894   50854 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 18:05:11.484898   50854 command_runner.go:130] > [crio.tracing]
	I0717 18:05:11.484905   50854 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 18:05:11.484912   50854 command_runner.go:130] > # enable_tracing = false
	I0717 18:05:11.484917   50854 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 18:05:11.484924   50854 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 18:05:11.484931   50854 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0717 18:05:11.484937   50854 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 18:05:11.484953   50854 command_runner.go:130] > # CRI-O NRI configuration.
	I0717 18:05:11.484962   50854 command_runner.go:130] > [crio.nri]
	I0717 18:05:11.484969   50854 command_runner.go:130] > # Globally enable or disable NRI.
	I0717 18:05:11.484976   50854 command_runner.go:130] > # enable_nri = false
	I0717 18:05:11.484980   50854 command_runner.go:130] > # NRI socket to listen on.
	I0717 18:05:11.484987   50854 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0717 18:05:11.484991   50854 command_runner.go:130] > # NRI plugin directory to use.
	I0717 18:05:11.484996   50854 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0717 18:05:11.485002   50854 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0717 18:05:11.485007   50854 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0717 18:05:11.485015   50854 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0717 18:05:11.485020   50854 command_runner.go:130] > # nri_disable_connections = false
	I0717 18:05:11.485027   50854 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0717 18:05:11.485032   50854 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0717 18:05:11.485039   50854 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0717 18:05:11.485043   50854 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0717 18:05:11.485051   50854 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 18:05:11.485056   50854 command_runner.go:130] > [crio.stats]
	I0717 18:05:11.485062   50854 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 18:05:11.485069   50854 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 18:05:11.485073   50854 command_runner.go:130] > # stats_collection_period = 0
	I0717 18:05:11.485164   50854 cni.go:84] Creating CNI manager for ""
	I0717 18:05:11.485173   50854 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 18:05:11.485180   50854 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:05:11.485203   50854 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-866205 NodeName:multinode-866205 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:05:11.485349   50854 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-866205"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:05:11.485414   50854 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:05:11.495073   50854 command_runner.go:130] > kubeadm
	I0717 18:05:11.495089   50854 command_runner.go:130] > kubectl
	I0717 18:05:11.495093   50854 command_runner.go:130] > kubelet
	I0717 18:05:11.495119   50854 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:05:11.495174   50854 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:05:11.504240   50854 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0717 18:05:11.519535   50854 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:05:11.534162   50854 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
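The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new (2157 bytes). A rough way to inspect it on the node; the diff target is an assumption about where minikube keeps the currently active copy:

sudo cat /var/tmp/minikube/kubeadm.yaml.new
sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true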
	I0717 18:05:11.548480   50854 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I0717 18:05:11.551896   50854 command_runner.go:130] > 192.168.39.16	control-plane.minikube.internal
	I0717 18:05:11.551965   50854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:05:11.689285   50854 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:05:11.703745   50854 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205 for IP: 192.168.39.16
	I0717 18:05:11.703772   50854 certs.go:194] generating shared ca certs ...
	I0717 18:05:11.703802   50854 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:05:11.703978   50854 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:05:11.704024   50854 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:05:11.704035   50854 certs.go:256] generating profile certs ...
	I0717 18:05:11.704137   50854 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/client.key
	I0717 18:05:11.704193   50854 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/apiserver.key.cece838c
	I0717 18:05:11.704238   50854 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/proxy-client.key
	I0717 18:05:11.704250   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:05:11.704265   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:05:11.704280   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:05:11.704297   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:05:11.704317   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:05:11.704373   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:05:11.704405   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:05:11.704421   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:05:11.704486   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:05:11.704517   50854 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:05:11.704528   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:05:11.704568   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:05:11.704594   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:05:11.704618   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:05:11.704658   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:05:11.704689   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem -> /usr/share/ca-certificates/21577.pem
	I0717 18:05:11.704704   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /usr/share/ca-certificates/215772.pem
	I0717 18:05:11.704718   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:05:11.705308   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:05:11.727514   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:05:11.748923   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:05:11.770128   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:05:11.791417   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:05:11.812372   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 18:05:11.833667   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:05:11.854675   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:05:11.875559   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:05:11.897397   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:05:11.918246   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:05:11.939318   50854 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:05:11.953727   50854 ssh_runner.go:195] Run: openssl version
	I0717 18:05:11.959057   50854 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 18:05:11.959111   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:05:11.968306   50854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:05:11.972164   50854 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:05:11.972215   50854 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:05:11.972244   50854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:05:11.977501   50854 command_runner.go:130] > 3ec20f2e
	I0717 18:05:11.977550   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:05:11.985560   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:05:11.994905   50854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:05:11.998747   50854 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:05:11.998772   50854 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:05:11.998816   50854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:05:12.003643   50854 command_runner.go:130] > b5213941
	I0717 18:05:12.003771   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:05:12.011939   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:05:12.021285   50854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:05:12.024993   50854 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:05:12.025121   50854 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:05:12.025154   50854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:05:12.029993   50854 command_runner.go:130] > 51391683
	I0717 18:05:12.030043   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
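	The three "ln -fs" steps above follow OpenSSL's subject-hash lookup convention: each CA file under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout", and a symlink named <hash>.0 is created in /etc/ssl/certs so verifiers can find the certificate by hash. Below is a minimal Go sketch of the same two operations; the file name, helper name linkCA, and paths are illustrative assumptions, not minikube's implementation.

	// hashlink.go - a sketch (not minikube's code) of the subject-hash symlink
	// convention seen above: /etc/ssl/certs/<hash>.0 -> certificate file.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCA hashes certPath with `openssl x509 -hash -noout` and creates the
	// <hash>.0 symlink in certsDir, mirroring the ln -fs commands in the log.
	func linkCA(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace an existing link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		// Paths are illustrative only.
		if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}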
	I0717 18:05:12.038299   50854 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:05:12.042372   50854 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:05:12.042388   50854 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0717 18:05:12.042394   50854 command_runner.go:130] > Device: 253,1	Inode: 5245461     Links: 1
	I0717 18:05:12.042400   50854 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 18:05:12.042406   50854 command_runner.go:130] > Access: 2024-07-17 17:58:21.631940860 +0000
	I0717 18:05:12.042411   50854 command_runner.go:130] > Modify: 2024-07-17 17:58:21.631940860 +0000
	I0717 18:05:12.042415   50854 command_runner.go:130] > Change: 2024-07-17 17:58:21.631940860 +0000
	I0717 18:05:12.042419   50854 command_runner.go:130] >  Birth: 2024-07-17 17:58:21.631940860 +0000
	I0717 18:05:12.042561   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:05:12.047564   50854 command_runner.go:130] > Certificate will not expire
	I0717 18:05:12.047608   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:05:12.052874   50854 command_runner.go:130] > Certificate will not expire
	I0717 18:05:12.053010   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:05:12.079190   50854 command_runner.go:130] > Certificate will not expire
	I0717 18:05:12.079261   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:05:12.084502   50854 command_runner.go:130] > Certificate will not expire
	I0717 18:05:12.084553   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:05:12.089992   50854 command_runner.go:130] > Certificate will not expire
	I0717 18:05:12.090053   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:05:12.095140   50854 command_runner.go:130] > Certificate will not expire
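	Each "openssl x509 -checkend 86400" invocation above asks whether the certificate expires within the next 86400 seconds (24 hours); openssl exits non-zero in that case, so the "Certificate will not expire" replies mean every control-plane certificate still has at least a day of validity. The following is a minimal sketch of the equivalent check using Go's crypto/x509 instead of shelling out; the file name and certificate path are illustrative and this is not minikube's own code.

	// checkend.go - a sketch of the 24-hour expiry check done above with
	// `openssl x509 -checkend 86400`, using crypto/x509 rather than openssl.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path is illustrative; on a minikube node the certs live under /var/lib/minikube/certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}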
	I0717 18:05:12.095212   50854 kubeadm.go:392] StartCluster: {Name:multinode-866205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-866205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:05:12.095353   50854 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:05:12.095416   50854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:05:12.127180   50854 command_runner.go:130] > 4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08
	I0717 18:05:12.127205   50854 command_runner.go:130] > 1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7
	I0717 18:05:12.127211   50854 command_runner.go:130] > 6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef
	I0717 18:05:12.127217   50854 command_runner.go:130] > 53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106
	I0717 18:05:12.127223   50854 command_runner.go:130] > 390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47
	I0717 18:05:12.127228   50854 command_runner.go:130] > bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a
	I0717 18:05:12.127238   50854 command_runner.go:130] > 5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f
	I0717 18:05:12.127245   50854 command_runner.go:130] > 768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d
	I0717 18:05:12.128504   50854 cri.go:89] found id: "4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08"
	I0717 18:05:12.128520   50854 cri.go:89] found id: "1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7"
	I0717 18:05:12.128524   50854 cri.go:89] found id: "6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef"
	I0717 18:05:12.128528   50854 cri.go:89] found id: "53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106"
	I0717 18:05:12.128530   50854 cri.go:89] found id: "390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47"
	I0717 18:05:12.128534   50854 cri.go:89] found id: "bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a"
	I0717 18:05:12.128537   50854 cri.go:89] found id: "5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f"
	I0717 18:05:12.128539   50854 cri.go:89] found id: "768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d"
	I0717 18:05:12.128542   50854 cri.go:89] found id: ""
	I0717 18:05:12.128589   50854 ssh_runner.go:195] Run: sudo runc list -f json
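	The "found id" lines above come from enumerating the node's existing kube-system containers with "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system", which prints one container ID per line. Below is a minimal sketch of that enumeration; the sudo/crictl invocation and the helper name are assumptions for illustration, not minikube's cri.go implementation.

	// listids.go - a sketch of the container enumeration above: run crictl with a
	// namespace label filter and collect the container IDs it prints, one per line.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}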
	
	
	==> CRI-O <==
	Jul 17 18:06:54 multinode-866205 crio[2868]: time="2024-07-17 18:06:54.967349872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239614967284591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5469dd95-0f79-478c-a088-f079c8d1e19e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:06:54 multinode-866205 crio[2868]: time="2024-07-17 18:06:54.967820665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a83a1a9b-627b-4450-bce0-0ae7bc026535 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:06:54 multinode-866205 crio[2868]: time="2024-07-17 18:06:54.967874459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a83a1a9b-627b-4450-bce0-0ae7bc026535 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:06:54 multinode-866205 crio[2868]: time="2024-07-17 18:06:54.968392275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4715a46e2baec137f37988949e5f783704acfefe3e92a3d4a0aa39dd54c648ca,PodSandboxId:8a810590d3716f6035bdd86963ffd02d2b98a3bec1491cd7afce399f2d77c915,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721239552802655844,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59bab707068cccd4ff807dfb4cbe1c9164ce49d80ecdb3334e035447c895132,PodSandboxId:43ad8250f0304816f7aca4c6eb7b33d619d35eceeb0728f21fcce0eeb1ed9f27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721239519346970397,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf7a6123dfc52831b70a4b7ab26667dc5dbfd3b6224dced4120c793504007930,PodSandboxId:3334be3daed4423efb4c5526619492349698fcf76ec835cf716d61803c2468e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239519337385540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126acaa753011c9c50ef72aaf9414bcf13f76b5f32b145885055dc58284112f0,PodSandboxId:130d890a0a2f25a870b2ad00d6a69f31bd2465561843ddc5f4561b6c17ffb3e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239519162543359,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},An
notations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600dc9dc4cbfe810d7bfdddf4001a2a6835b4f561ddee8ec89a3b97c0781e7e,PodSandboxId:f5a233028b6dbe79a4f81ad15478623db2e0b0fc0266375c0785bfc00d0fe23e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239519092702782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.ku
bernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9b4a09a89d9644c160bc53340de6d77de564f71b2afa786f3a582fdabfda56,PodSandboxId:35a360b43087033a085d7395fb453963cb00fd9958c9e64e1ecac26da1336029,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239514321889051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f771f4846c09fc27b0dda60952111f83f446d6df7eaf2a8998a5a20c2489aa45,PodSandboxId:8d8ee5895be8cdf42d8a5d3315f4fd0e2d6953134f1db2151287c953b2f775fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239514267128470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e8e3edcb645f3c0ff2b2960f9eec7a22f72853f96d97b2a2f1a60774be4ecd,PodSandboxId:eddf0dc388923d38de15f221cada729d343eedc7bb7e6263b323c4494e610d2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239514276879924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85689b761c08356d8c72ddbb6741d7811846bc176e5620e1f16292d2405380d2,PodSandboxId:0d667fdc790e4295acfcd1853c0d6d179a94cbb3a2a6d2c1b8bb2fdf763ac335,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239514193889527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036b3403e707124943825446aceca9e338fc1ad99d10a5fcb05ee5517fb831aa,PodSandboxId:f8e0d93c1dfef807c220fb730ad6a45f781d414dc379d0c5b88920d16ededd46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721239192830223235,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08,PodSandboxId:64f0544b273e837cc65e06e2daef1c2dff00a450bd15743d29d37db7f39428ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721239140324421303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7,PodSandboxId:4340bcf9e6fc2a296e9f277f39aebcf6b017024c2f0ccf7c4189a18216254786,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721239140265764506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},Annotations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef,PodSandboxId:af144f702171c92d505f826276bacfb71330149c01daa5fc2e1d2c2e2dac8889,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721239128563961477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106,PodSandboxId:699ce739fe6ca4d1b4b158cf41ff1ac719699fa57d2b3109d79ac3eea632728b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721239125014872368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.kubernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47,PodSandboxId:34e758884b69c4fce261741223b7e26143eb367fa6ed14938c7eb87c5afea287,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721239105888131919,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb
7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d,PodSandboxId:3dc429606d85505f6988a036253115b79a69058203d271847a8a06b8eee06c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721239105807182680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations
:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a,PodSandboxId:bec9243ce95ed60e667350a889a7f4b3b9a0523ee5a4261a31fea8492a8cb0dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721239105832036373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f,PodSandboxId:cef1c1c90ad7bcd6131429de1911c31541baccb72239ea5517e9b3d46d6ca94a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721239105812608924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map
[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a83a1a9b-627b-4450-bce0-0ae7bc026535 name=/runtime.v1.RuntimeService/ListContainers
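	The CRI-O debug entries above are request/response pairs on the CRI gRPC API: a client calls /runtime.v1.RuntimeService/ListContainers and, because the filter is empty, CRI-O returns its full container list (the large ListContainersResponse dumps). A minimal client sketch against the same endpoint follows, assuming CRI-O's default socket path and the k8s.io/cri-api Go bindings; it is illustrative only, not how kubelet or crictl are implemented.

	// listcontainers.go - a sketch of the gRPC exchange logged above: call
	// /runtime.v1.RuntimeService/ListContainers on the CRI-O socket and print results.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default CRI socket; adjust if the runtime is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter reproduces the "No filters were applied" case in the log:
		// the runtime returns every container it knows about.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
			&runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}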
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.006540879Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c6341bd-f175-40fd-add6-758a2c55061a name=/runtime.v1.RuntimeService/Version
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.006626642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c6341bd-f175-40fd-add6-758a2c55061a name=/runtime.v1.RuntimeService/Version
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.008096224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=311899af-d746-4b05-aa38-3b1a0509f824 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.008666775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239615008640609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=311899af-d746-4b05-aa38-3b1a0509f824 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.009141189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdf5803e-bcc8-4a7b-b45b-10e85fc540be name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.009194826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdf5803e-bcc8-4a7b-b45b-10e85fc540be name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.009579974Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4715a46e2baec137f37988949e5f783704acfefe3e92a3d4a0aa39dd54c648ca,PodSandboxId:8a810590d3716f6035bdd86963ffd02d2b98a3bec1491cd7afce399f2d77c915,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721239552802655844,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59bab707068cccd4ff807dfb4cbe1c9164ce49d80ecdb3334e035447c895132,PodSandboxId:43ad8250f0304816f7aca4c6eb7b33d619d35eceeb0728f21fcce0eeb1ed9f27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721239519346970397,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf7a6123dfc52831b70a4b7ab26667dc5dbfd3b6224dced4120c793504007930,PodSandboxId:3334be3daed4423efb4c5526619492349698fcf76ec835cf716d61803c2468e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239519337385540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126acaa753011c9c50ef72aaf9414bcf13f76b5f32b145885055dc58284112f0,PodSandboxId:130d890a0a2f25a870b2ad00d6a69f31bd2465561843ddc5f4561b6c17ffb3e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239519162543359,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},An
notations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600dc9dc4cbfe810d7bfdddf4001a2a6835b4f561ddee8ec89a3b97c0781e7e,PodSandboxId:f5a233028b6dbe79a4f81ad15478623db2e0b0fc0266375c0785bfc00d0fe23e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239519092702782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.ku
bernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9b4a09a89d9644c160bc53340de6d77de564f71b2afa786f3a582fdabfda56,PodSandboxId:35a360b43087033a085d7395fb453963cb00fd9958c9e64e1ecac26da1336029,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239514321889051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f771f4846c09fc27b0dda60952111f83f446d6df7eaf2a8998a5a20c2489aa45,PodSandboxId:8d8ee5895be8cdf42d8a5d3315f4fd0e2d6953134f1db2151287c953b2f775fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239514267128470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e8e3edcb645f3c0ff2b2960f9eec7a22f72853f96d97b2a2f1a60774be4ecd,PodSandboxId:eddf0dc388923d38de15f221cada729d343eedc7bb7e6263b323c4494e610d2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239514276879924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85689b761c08356d8c72ddbb6741d7811846bc176e5620e1f16292d2405380d2,PodSandboxId:0d667fdc790e4295acfcd1853c0d6d179a94cbb3a2a6d2c1b8bb2fdf763ac335,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239514193889527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036b3403e707124943825446aceca9e338fc1ad99d10a5fcb05ee5517fb831aa,PodSandboxId:f8e0d93c1dfef807c220fb730ad6a45f781d414dc379d0c5b88920d16ededd46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721239192830223235,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08,PodSandboxId:64f0544b273e837cc65e06e2daef1c2dff00a450bd15743d29d37db7f39428ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721239140324421303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7,PodSandboxId:4340bcf9e6fc2a296e9f277f39aebcf6b017024c2f0ccf7c4189a18216254786,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721239140265764506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},Annotations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef,PodSandboxId:af144f702171c92d505f826276bacfb71330149c01daa5fc2e1d2c2e2dac8889,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721239128563961477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106,PodSandboxId:699ce739fe6ca4d1b4b158cf41ff1ac719699fa57d2b3109d79ac3eea632728b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721239125014872368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.kubernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47,PodSandboxId:34e758884b69c4fce261741223b7e26143eb367fa6ed14938c7eb87c5afea287,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721239105888131919,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb
7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d,PodSandboxId:3dc429606d85505f6988a036253115b79a69058203d271847a8a06b8eee06c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721239105807182680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations
:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a,PodSandboxId:bec9243ce95ed60e667350a889a7f4b3b9a0523ee5a4261a31fea8492a8cb0dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721239105832036373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f,PodSandboxId:cef1c1c90ad7bcd6131429de1911c31541baccb72239ea5517e9b3d46d6ca94a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721239105812608924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map
[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdf5803e-bcc8-4a7b-b45b-10e85fc540be name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.049595160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=546ee95d-266b-4cd2-ae76-707f37f06943 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.049686646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=546ee95d-266b-4cd2-ae76-707f37f06943 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.051043871Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e30b201-ffcc-4c3d-a994-7020cd533a99 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.051704051Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239615051677925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e30b201-ffcc-4c3d-a994-7020cd533a99 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.052285233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6216a4eb-edc1-4efa-84c3-f2af01e42c94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.052388923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6216a4eb-edc1-4efa-84c3-f2af01e42c94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.052786811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4715a46e2baec137f37988949e5f783704acfefe3e92a3d4a0aa39dd54c648ca,PodSandboxId:8a810590d3716f6035bdd86963ffd02d2b98a3bec1491cd7afce399f2d77c915,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721239552802655844,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59bab707068cccd4ff807dfb4cbe1c9164ce49d80ecdb3334e035447c895132,PodSandboxId:43ad8250f0304816f7aca4c6eb7b33d619d35eceeb0728f21fcce0eeb1ed9f27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721239519346970397,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf7a6123dfc52831b70a4b7ab26667dc5dbfd3b6224dced4120c793504007930,PodSandboxId:3334be3daed4423efb4c5526619492349698fcf76ec835cf716d61803c2468e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239519337385540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126acaa753011c9c50ef72aaf9414bcf13f76b5f32b145885055dc58284112f0,PodSandboxId:130d890a0a2f25a870b2ad00d6a69f31bd2465561843ddc5f4561b6c17ffb3e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239519162543359,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},An
notations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600dc9dc4cbfe810d7bfdddf4001a2a6835b4f561ddee8ec89a3b97c0781e7e,PodSandboxId:f5a233028b6dbe79a4f81ad15478623db2e0b0fc0266375c0785bfc00d0fe23e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239519092702782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.ku
bernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9b4a09a89d9644c160bc53340de6d77de564f71b2afa786f3a582fdabfda56,PodSandboxId:35a360b43087033a085d7395fb453963cb00fd9958c9e64e1ecac26da1336029,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239514321889051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f771f4846c09fc27b0dda60952111f83f446d6df7eaf2a8998a5a20c2489aa45,PodSandboxId:8d8ee5895be8cdf42d8a5d3315f4fd0e2d6953134f1db2151287c953b2f775fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239514267128470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e8e3edcb645f3c0ff2b2960f9eec7a22f72853f96d97b2a2f1a60774be4ecd,PodSandboxId:eddf0dc388923d38de15f221cada729d343eedc7bb7e6263b323c4494e610d2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239514276879924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85689b761c08356d8c72ddbb6741d7811846bc176e5620e1f16292d2405380d2,PodSandboxId:0d667fdc790e4295acfcd1853c0d6d179a94cbb3a2a6d2c1b8bb2fdf763ac335,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239514193889527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036b3403e707124943825446aceca9e338fc1ad99d10a5fcb05ee5517fb831aa,PodSandboxId:f8e0d93c1dfef807c220fb730ad6a45f781d414dc379d0c5b88920d16ededd46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721239192830223235,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08,PodSandboxId:64f0544b273e837cc65e06e2daef1c2dff00a450bd15743d29d37db7f39428ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721239140324421303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7,PodSandboxId:4340bcf9e6fc2a296e9f277f39aebcf6b017024c2f0ccf7c4189a18216254786,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721239140265764506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},Annotations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef,PodSandboxId:af144f702171c92d505f826276bacfb71330149c01daa5fc2e1d2c2e2dac8889,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721239128563961477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106,PodSandboxId:699ce739fe6ca4d1b4b158cf41ff1ac719699fa57d2b3109d79ac3eea632728b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721239125014872368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.kubernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47,PodSandboxId:34e758884b69c4fce261741223b7e26143eb367fa6ed14938c7eb87c5afea287,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721239105888131919,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb
7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d,PodSandboxId:3dc429606d85505f6988a036253115b79a69058203d271847a8a06b8eee06c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721239105807182680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations
:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a,PodSandboxId:bec9243ce95ed60e667350a889a7f4b3b9a0523ee5a4261a31fea8492a8cb0dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721239105832036373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f,PodSandboxId:cef1c1c90ad7bcd6131429de1911c31541baccb72239ea5517e9b3d46d6ca94a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721239105812608924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map
[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6216a4eb-edc1-4efa-84c3-f2af01e42c94 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.095286374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68e65c4f-4fe6-4b92-bca1-530c8deae6d5 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.095425405Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68e65c4f-4fe6-4b92-bca1-530c8deae6d5 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.097075272Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b190b00f-3e14-43c3-8833-1a5242161afc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.097605497Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239615097555361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b190b00f-3e14-43c3-8833-1a5242161afc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.098115661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0d206fb-1d1a-4cac-9539-e38af171279a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.098179612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0d206fb-1d1a-4cac-9539-e38af171279a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:06:55 multinode-866205 crio[2868]: time="2024-07-17 18:06:55.098913758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4715a46e2baec137f37988949e5f783704acfefe3e92a3d4a0aa39dd54c648ca,PodSandboxId:8a810590d3716f6035bdd86963ffd02d2b98a3bec1491cd7afce399f2d77c915,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721239552802655844,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59bab707068cccd4ff807dfb4cbe1c9164ce49d80ecdb3334e035447c895132,PodSandboxId:43ad8250f0304816f7aca4c6eb7b33d619d35eceeb0728f21fcce0eeb1ed9f27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721239519346970397,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf7a6123dfc52831b70a4b7ab26667dc5dbfd3b6224dced4120c793504007930,PodSandboxId:3334be3daed4423efb4c5526619492349698fcf76ec835cf716d61803c2468e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239519337385540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126acaa753011c9c50ef72aaf9414bcf13f76b5f32b145885055dc58284112f0,PodSandboxId:130d890a0a2f25a870b2ad00d6a69f31bd2465561843ddc5f4561b6c17ffb3e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239519162543359,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},An
notations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600dc9dc4cbfe810d7bfdddf4001a2a6835b4f561ddee8ec89a3b97c0781e7e,PodSandboxId:f5a233028b6dbe79a4f81ad15478623db2e0b0fc0266375c0785bfc00d0fe23e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239519092702782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.ku
bernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9b4a09a89d9644c160bc53340de6d77de564f71b2afa786f3a582fdabfda56,PodSandboxId:35a360b43087033a085d7395fb453963cb00fd9958c9e64e1ecac26da1336029,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239514321889051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f771f4846c09fc27b0dda60952111f83f446d6df7eaf2a8998a5a20c2489aa45,PodSandboxId:8d8ee5895be8cdf42d8a5d3315f4fd0e2d6953134f1db2151287c953b2f775fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239514267128470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e8e3edcb645f3c0ff2b2960f9eec7a22f72853f96d97b2a2f1a60774be4ecd,PodSandboxId:eddf0dc388923d38de15f221cada729d343eedc7bb7e6263b323c4494e610d2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239514276879924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85689b761c08356d8c72ddbb6741d7811846bc176e5620e1f16292d2405380d2,PodSandboxId:0d667fdc790e4295acfcd1853c0d6d179a94cbb3a2a6d2c1b8bb2fdf763ac335,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239514193889527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036b3403e707124943825446aceca9e338fc1ad99d10a5fcb05ee5517fb831aa,PodSandboxId:f8e0d93c1dfef807c220fb730ad6a45f781d414dc379d0c5b88920d16ededd46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721239192830223235,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08,PodSandboxId:64f0544b273e837cc65e06e2daef1c2dff00a450bd15743d29d37db7f39428ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721239140324421303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7,PodSandboxId:4340bcf9e6fc2a296e9f277f39aebcf6b017024c2f0ccf7c4189a18216254786,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721239140265764506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},Annotations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef,PodSandboxId:af144f702171c92d505f826276bacfb71330149c01daa5fc2e1d2c2e2dac8889,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721239128563961477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106,PodSandboxId:699ce739fe6ca4d1b4b158cf41ff1ac719699fa57d2b3109d79ac3eea632728b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721239125014872368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.kubernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47,PodSandboxId:34e758884b69c4fce261741223b7e26143eb367fa6ed14938c7eb87c5afea287,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721239105888131919,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb
7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d,PodSandboxId:3dc429606d85505f6988a036253115b79a69058203d271847a8a06b8eee06c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721239105807182680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations
:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a,PodSandboxId:bec9243ce95ed60e667350a889a7f4b3b9a0523ee5a4261a31fea8492a8cb0dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721239105832036373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f,PodSandboxId:cef1c1c90ad7bcd6131429de1911c31541baccb72239ea5517e9b3d46d6ca94a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721239105812608924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map
[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0d206fb-1d1a-4cac-9539-e38af171279a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4715a46e2baec       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   8a810590d3716       busybox-fc5497c4f-pkq4s
	d59bab707068c       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      About a minute ago   Running             kindnet-cni               1                   43ad8250f0304       kindnet-r7gm7
	cf7a6123dfc52       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   3334be3daed44       coredns-7db6d8ff4d-qmclk
	126acaa753011       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   130d890a0a2f2       storage-provisioner
	a600dc9dc4cbf       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      About a minute ago   Running             kube-proxy                1                   f5a233028b6db       kube-proxy-tp9f2
	ee9b4a09a89d9       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      About a minute ago   Running             kube-scheduler            1                   35a360b430870       kube-scheduler-multinode-866205
	10e8e3edcb645       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      About a minute ago   Running             kube-controller-manager   1                   eddf0dc388923       kube-controller-manager-multinode-866205
	f771f4846c09f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   8d8ee5895be8c       etcd-multinode-866205
	85689b761c083       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      About a minute ago   Running             kube-apiserver            1                   0d667fdc790e4       kube-apiserver-multinode-866205
	036b3403e7071       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   f8e0d93c1dfef       busybox-fc5497c4f-pkq4s
	4d6289bad2649       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   64f0544b273e8       coredns-7db6d8ff4d-qmclk
	1815402f04f91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   4340bcf9e6fc2       storage-provisioner
	6a18586244f14       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    8 minutes ago        Exited              kindnet-cni               0                   af144f702171c       kindnet-r7gm7
	53d93ab94e35d       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      8 minutes ago        Exited              kube-proxy                0                   699ce739fe6ca       kube-proxy-tp9f2
	390153e91db47       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   34e758884b69c       etcd-multinode-866205
	bf1f3ab84c4d1       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      8 minutes ago        Exited              kube-scheduler            0                   bec9243ce95ed       kube-scheduler-multinode-866205
	5348c0dad6a9d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      8 minutes ago        Exited              kube-controller-manager   0                   cef1c1c90ad7b       kube-controller-manager-multinode-866205
	768cb64a493ab       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      8 minutes ago        Exited              kube-apiserver            0                   3dc429606d855       kube-apiserver-multinode-866205
	
	
	==> coredns [4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08] <==
	[INFO] 10.244.0.3:49209 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001705136s
	[INFO] 10.244.0.3:57868 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103651s
	[INFO] 10.244.0.3:45699 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101576s
	[INFO] 10.244.0.3:41906 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00102114s
	[INFO] 10.244.0.3:51373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061532s
	[INFO] 10.244.0.3:48020 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063962s
	[INFO] 10.244.0.3:55300 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068727s
	[INFO] 10.244.1.2:55232 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168559s
	[INFO] 10.244.1.2:48518 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121748s
	[INFO] 10.244.1.2:51994 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090624s
	[INFO] 10.244.1.2:52302 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093904s
	[INFO] 10.244.0.3:45649 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115979s
	[INFO] 10.244.0.3:49954 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069946s
	[INFO] 10.244.0.3:40368 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066178s
	[INFO] 10.244.0.3:43249 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083478s
	[INFO] 10.244.1.2:59587 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143079s
	[INFO] 10.244.1.2:59246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015225s
	[INFO] 10.244.1.2:45237 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000150468s
	[INFO] 10.244.1.2:55372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172366s
	[INFO] 10.244.0.3:47268 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107413s
	[INFO] 10.244.0.3:34574 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153236s
	[INFO] 10.244.0.3:58321 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000049605s
	[INFO] 10.244.0.3:51214 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056821s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cf7a6123dfc52831b70a4b7ab26667dc5dbfd3b6224dced4120c793504007930] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46587 - 62762 "HINFO IN 9208503358798563584.6106408047153594360. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023371479s
	
	
	==> describe nodes <==
	Name:               multinode-866205
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-866205
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=multinode-866205
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T17_58_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:58:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-866205
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:06:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:05:17 +0000   Wed, 17 Jul 2024 17:58:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:05:17 +0000   Wed, 17 Jul 2024 17:58:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:05:17 +0000   Wed, 17 Jul 2024 17:58:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:05:17 +0000   Wed, 17 Jul 2024 17:58:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    multinode-866205
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64374630be4d4569b107ad30571f6123
	  System UUID:                64374630-be4d-4569-b107-ad30571f6123
	  Boot ID:                    4ba17509-dbc9-4811-8fcc-26405b310e79
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pkq4s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 coredns-7db6d8ff4d-qmclk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m11s
	  kube-system                 etcd-multinode-866205                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m25s
	  kube-system                 kindnet-r7gm7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m12s
	  kube-system                 kube-apiserver-multinode-866205             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-controller-manager-multinode-866205    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-tp9f2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-scheduler-multinode-866205             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m9s                   kube-proxy       
	  Normal  Starting                 95s                    kube-proxy       
	  Normal  NodeAllocatableEnforced  8m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m25s (x2 over 8m25s)  kubelet          Node multinode-866205 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m25s (x2 over 8m25s)  kubelet          Node multinode-866205 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m25s (x2 over 8m25s)  kubelet          Node multinode-866205 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m25s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m12s                  node-controller  Node multinode-866205 event: Registered Node multinode-866205 in Controller
	  Normal  NodeReady                7m56s                  kubelet          Node multinode-866205 status is now: NodeReady
	  Normal  Starting                 102s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)    kubelet          Node multinode-866205 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)    kubelet          Node multinode-866205 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 102s)    kubelet          Node multinode-866205 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                    node-controller  Node multinode-866205 event: Registered Node multinode-866205 in Controller
	
	
	Name:               multinode-866205-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-866205-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=multinode-866205
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_05_56_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:05:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-866205-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:06:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:06:26 +0000   Wed, 17 Jul 2024 18:05:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:06:26 +0000   Wed, 17 Jul 2024 18:05:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:06:26 +0000   Wed, 17 Jul 2024 18:05:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:06:26 +0000   Wed, 17 Jul 2024 18:06:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    multinode-866205-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 968982e915f44dbb99c84c4f9e1ee63f
	  System UUID:                968982e9-15f4-4dbb-99c8-4c4f9e1ee63f
	  Boot ID:                    8856b435-38eb-4549-82d6-23623f5fb96f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bs4fx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kindnet-fwnkd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m29s
	  kube-system                 kube-proxy-sq4xn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 55s                    kube-proxy       
	  Normal  Starting                 7m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m29s (x2 over 7m29s)  kubelet          Node multinode-866205-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s (x2 over 7m29s)  kubelet          Node multinode-866205-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s (x2 over 7m29s)  kubelet          Node multinode-866205-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m8s                   kubelet          Node multinode-866205-m02 status is now: NodeReady
	  Normal  Starting                 60s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet          Node multinode-866205-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet          Node multinode-866205-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet          Node multinode-866205-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           55s                    node-controller  Node multinode-866205-m02 event: Registered Node multinode-866205-m02 in Controller
	  Normal  NodeReady                41s                    kubelet          Node multinode-866205-m02 status is now: NodeReady
	
	
	Name:               multinode-866205-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-866205-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=multinode-866205
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_06_34_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:06:33 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-866205-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:06:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:06:52 +0000   Wed, 17 Jul 2024 18:06:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:06:52 +0000   Wed, 17 Jul 2024 18:06:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:06:52 +0000   Wed, 17 Jul 2024 18:06:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:06:52 +0000   Wed, 17 Jul 2024 18:06:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-866205-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4cca7b88806b46e99614cab69e32ed22
	  System UUID:                4cca7b88-806b-46e9-9614-cab69e32ed22
	  Boot ID:                    4620e70a-6b3e-43c6-9144-d2c264fe5aeb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-54x54       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-proxy-sgnbd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m30s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m41s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m35s (x2 over 6m35s)  kubelet     Node multinode-866205-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x2 over 6m35s)  kubelet     Node multinode-866205-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x2 over 6m35s)  kubelet     Node multinode-866205-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m15s                  kubelet     Node multinode-866205-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet     Node multinode-866205-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet     Node multinode-866205-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet     Node multinode-866205-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m27s                  kubelet     Node multinode-866205-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-866205-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-866205-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-866205-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-866205-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.061087] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.172999] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.108411] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.253097] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.780993] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +5.517334] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.054400] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.496148] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.076667] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.356585] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.709909] systemd-fstab-generator[1474]: Ignoring "noauto" option for root device
	[Jul17 17:59] kauditd_printk_skb: 60 callbacks suppressed
	[ +50.020259] kauditd_printk_skb: 12 callbacks suppressed
	[Jul17 18:05] systemd-fstab-generator[2785]: Ignoring "noauto" option for root device
	[  +0.143886] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.160706] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.135850] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +0.262632] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +6.050344] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[  +0.078991] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.693520] systemd-fstab-generator[3074]: Ignoring "noauto" option for root device
	[  +5.636976] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.799315] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.522212] systemd-fstab-generator[3909]: Ignoring "noauto" option for root device
	[ +21.443514] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47] <==
	{"level":"warn","ts":"2024-07-17T17:59:35.402357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.847797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T17:59:35.402627Z","caller":"traceutil/trace.go:171","msg":"trace[758796295] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:489; }","duration":"216.189848ms","start":"2024-07-17T17:59:35.186425Z","end":"2024-07-17T17:59:35.402614Z","steps":["trace[758796295] 'count revisions from in-memory index tree'  (duration: 215.751372ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:59:35.531545Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.101749ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1163213766002895341 > lease_revoke:<id:102490c1d8726951>","response":"size:28"}
	{"level":"info","ts":"2024-07-17T17:59:35.531619Z","caller":"traceutil/trace.go:171","msg":"trace[1308881342] linearizableReadLoop","detail":"{readStateIndex:516; appliedIndex:515; }","duration":"258.125534ms","start":"2024-07-17T17:59:35.273481Z","end":"2024-07-17T17:59:35.531606Z","steps":["trace[1308881342] 'read index received'  (duration: 65.933885ms)","trace[1308881342] 'applied index is now lower than readState.Index'  (duration: 192.190641ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T17:59:35.531713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.241194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-866205-m02\" ","response":"range_response_count:1 size:3023"}
	{"level":"info","ts":"2024-07-17T17:59:35.531747Z","caller":"traceutil/trace.go:171","msg":"trace[1242444334] range","detail":"{range_begin:/registry/minions/multinode-866205-m02; range_end:; response_count:1; response_revision:489; }","duration":"258.29749ms","start":"2024-07-17T17:59:35.273441Z","end":"2024-07-17T17:59:35.531738Z","steps":["trace[1242444334] 'agreement among raft nodes before linearized reading'  (duration: 258.23102ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:59:35.531817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.119546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-17T17:59:35.533117Z","caller":"traceutil/trace.go:171","msg":"trace[1973191743] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:489; }","duration":"118.441402ms","start":"2024-07-17T17:59:35.414659Z","end":"2024-07-17T17:59:35.533101Z","steps":["trace[1973191743] 'agreement among raft nodes before linearized reading'  (duration: 117.057126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:00:20.365912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.815402ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1163213766002895676 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-866205-m03.17e311f37e69d9b0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-866205-m03.17e311f37e69d9b0\" value_size:646 lease:1163213766002895309 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T18:00:20.366182Z","caller":"traceutil/trace.go:171","msg":"trace[602514586] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"165.226418ms","start":"2024-07-17T18:00:20.20094Z","end":"2024-07-17T18:00:20.366166Z","steps":["trace[602514586] 'process raft request'  (duration: 165.170879ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:00:20.366356Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.110539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-866205-m03\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-07-17T18:00:20.366395Z","caller":"traceutil/trace.go:171","msg":"trace[612566261] range","detail":"{range_begin:/registry/minions/multinode-866205-m03; range_end:; response_count:1; response_revision:580; }","duration":"193.239152ms","start":"2024-07-17T18:00:20.173149Z","end":"2024-07-17T18:00:20.366388Z","steps":["trace[612566261] 'agreement among raft nodes before linearized reading'  (duration: 193.08749ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:00:20.36617Z","caller":"traceutil/trace.go:171","msg":"trace[568527838] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"241.648273ms","start":"2024-07-17T18:00:20.124493Z","end":"2024-07-17T18:00:20.366141Z","steps":["trace[568527838] 'process raft request'  (duration: 86.489137ms)","trace[568527838] 'compare'  (duration: 154.710804ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T18:00:20.366225Z","caller":"traceutil/trace.go:171","msg":"trace[563045572] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:614; }","duration":"193.039397ms","start":"2024-07-17T18:00:20.17318Z","end":"2024-07-17T18:00:20.366219Z","steps":["trace[563045572] 'read index received'  (duration: 37.810691ms)","trace[563045572] 'applied index is now lower than readState.Index'  (duration: 155.227949ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T18:01:13.861061Z","caller":"traceutil/trace.go:171","msg":"trace[652025700] transaction","detail":"{read_only:false; response_revision:706; number_of_response:1; }","duration":"167.412828ms","start":"2024-07-17T18:01:13.693629Z","end":"2024-07-17T18:01:13.861042Z","steps":["trace[652025700] 'process raft request'  (duration: 167.292205ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:03:33.686853Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T18:03:33.686953Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-866205","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.16:2380"],"advertise-client-urls":["https://192.168.39.16:2379"]}
	{"level":"warn","ts":"2024-07-17T18:03:33.687071Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:03:33.687192Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:03:33.733168Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:03:33.733253Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T18:03:33.734292Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b6c76b3131c1024","current-leader-member-id":"b6c76b3131c1024"}
	{"level":"info","ts":"2024-07-17T18:03:33.73686Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-17T18:03:33.737032Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-17T18:03:33.737093Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-866205","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.16:2380"],"advertise-client-urls":["https://192.168.39.16:2379"]}
	
	
	==> etcd [f771f4846c09fc27b0dda60952111f83f446d6df7eaf2a8998a5a20c2489aa45] <==
	{"level":"info","ts":"2024-07-17T18:05:14.66944Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T18:05:14.669597Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T18:05:14.675909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 switched to configuration voters=(823163343393787940)"}
	{"level":"info","ts":"2024-07-17T18:05:14.676299Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cad58bbf0f3daddf","local-member-id":"b6c76b3131c1024","added-peer-id":"b6c76b3131c1024","added-peer-peer-urls":["https://192.168.39.16:2380"]}
	{"level":"info","ts":"2024-07-17T18:05:14.677686Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cad58bbf0f3daddf","local-member-id":"b6c76b3131c1024","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:05:14.677758Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:05:14.693771Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T18:05:14.693925Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-17T18:05:14.694601Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-17T18:05:14.696046Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b6c76b3131c1024","initial-advertise-peer-urls":["https://192.168.39.16:2380"],"listen-peer-urls":["https://192.168.39.16:2380"],"advertise-client-urls":["https://192.168.39.16:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.16:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T18:05:14.696222Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T18:05:16.528998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T18:05:16.529128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:05:16.529178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-17T18:05:16.529221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T18:05:16.529245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgVoteResp from b6c76b3131c1024 at term 3"}
	{"level":"info","ts":"2024-07-17T18:05:16.529277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T18:05:16.529341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b6c76b3131c1024 elected leader b6c76b3131c1024 at term 3"}
	{"level":"info","ts":"2024-07-17T18:05:16.534773Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:05:16.534844Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:05:16.534881Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:05:16.534615Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b6c76b3131c1024","local-member-attributes":"{Name:multinode-866205 ClientURLs:[https://192.168.39.16:2379]}","request-path":"/0/members/b6c76b3131c1024/attributes","cluster-id":"cad58bbf0f3daddf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:05:16.535403Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:05:16.536967Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.16:2379"}
	{"level":"info","ts":"2024-07-17T18:05:16.537294Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:06:55 up 8 min,  0 users,  load average: 0.16, 0.13, 0.07
	Linux multinode-866205 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef] <==
	I0717 18:02:49.477015       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:02:59.476762       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:02:59.476809       1 main.go:303] handling current node
	I0717 18:02:59.476828       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:02:59.476833       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:02:59.476993       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:02:59.477018       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.3.0/24] 
	I0717 18:03:09.486030       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:03:09.486076       1 main.go:303] handling current node
	I0717 18:03:09.486096       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:03:09.486101       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:03:09.486246       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:03:09.486353       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.3.0/24] 
	I0717 18:03:19.485455       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:03:19.485565       1 main.go:303] handling current node
	I0717 18:03:19.485601       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:03:19.485620       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:03:19.485759       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:03:19.485782       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.3.0/24] 
	I0717 18:03:29.481021       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:03:29.481201       1 main.go:303] handling current node
	I0717 18:03:29.481269       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:03:29.481296       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:03:29.481573       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:03:29.481600       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d59bab707068cccd4ff807dfb4cbe1c9164ce49d80ecdb3334e035447c895132] <==
	I0717 18:06:10.193001       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.3.0/24] 
	I0717 18:06:20.190621       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:06:20.190688       1 main.go:303] handling current node
	I0717 18:06:20.190702       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:06:20.190708       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:06:20.190864       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:06:20.190886       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.3.0/24] 
	I0717 18:06:30.192397       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:06:30.192481       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:06:30.192853       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:06:30.192877       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.3.0/24] 
	I0717 18:06:30.192937       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:06:30.192955       1 main.go:303] handling current node
	I0717 18:06:40.189646       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:06:40.189762       1 main.go:303] handling current node
	I0717 18:06:40.189790       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:06:40.189808       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:06:40.190002       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:06:40.190053       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.2.0/24] 
	I0717 18:06:50.190095       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:06:50.190217       1 main.go:303] handling current node
	I0717 18:06:50.190256       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:06:50.190276       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:06:50.190502       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:06:50.190542       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d] <==
	W0717 18:03:33.723380       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723425       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723456       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723504       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723548       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723570       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723617       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723659       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723707       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723729       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723781       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723817       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723857       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723902       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723935       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723828       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.724015       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.724075       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723902       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723712       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723552       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723439       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723917       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723978       1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.724174       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [85689b761c08356d8c72ddbb6741d7811846bc176e5620e1f16292d2405380d2] <==
	I0717 18:05:17.843102       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 18:05:17.843634       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 18:05:17.843690       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 18:05:17.843822       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 18:05:17.844561       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 18:05:17.844714       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 18:05:17.844790       1 aggregator.go:165] initial CRD sync complete...
	I0717 18:05:17.844830       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 18:05:17.844852       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 18:05:17.844874       1 cache.go:39] Caches are synced for autoregister controller
	I0717 18:05:17.844971       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 18:05:17.849759       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0717 18:05:17.851879       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0717 18:05:17.908001       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 18:05:17.914540       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 18:05:17.914571       1 policy_source.go:224] refreshing policies
	I0717 18:05:17.991010       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:05:18.745703       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 18:05:20.133012       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 18:05:20.245220       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 18:05:20.261031       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 18:05:20.330278       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 18:05:20.336293       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 18:05:30.685967       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 18:05:30.768857       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [10e8e3edcb645f3c0ff2b2960f9eec7a22f72853f96d97b2a2f1a60774be4ecd] <==
	I0717 18:05:30.811108       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 18:05:31.226810       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 18:05:31.272379       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 18:05:31.272429       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 18:05:51.557795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.920537ms"
	I0717 18:05:51.566213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.268778ms"
	I0717 18:05:51.575632       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.303534ms"
	I0717 18:05:51.575793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.865µs"
	I0717 18:05:55.808943       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-866205-m02\" does not exist"
	I0717 18:05:55.820684       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-866205-m02" podCIDRs=["10.244.1.0/24"]
	I0717 18:05:56.647009       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.506µs"
	I0717 18:05:57.774638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.923µs"
	I0717 18:05:57.780044       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.408µs"
	I0717 18:05:57.785356       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.355µs"
	I0717 18:05:57.787350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.489µs"
	I0717 18:06:14.633782       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:06:14.653875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.589µs"
	I0717 18:06:14.666408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.049µs"
	I0717 18:06:18.191570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.125477ms"
	I0717 18:06:18.191829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.657µs"
	I0717 18:06:32.473403       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:06:33.519797       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:06:33.519917       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-866205-m03\" does not exist"
	I0717 18:06:33.537150       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-866205-m03" podCIDRs=["10.244.2.0/24"]
	I0717 18:06:52.338980       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m03"
	
	
	==> kube-controller-manager [5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f] <==
	I0717 17:59:03.555927       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 17:59:26.774994       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-866205-m02\" does not exist"
	I0717 17:59:26.789646       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-866205-m02" podCIDRs=["10.244.1.0/24"]
	I0717 17:59:28.560105       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-866205-m02"
	I0717 17:59:47.364360       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 17:59:49.826001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.693694ms"
	I0717 17:59:49.838498       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.214519ms"
	I0717 17:59:49.838580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.031µs"
	I0717 17:59:53.004082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.085633ms"
	I0717 17:59:53.004474       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.202µs"
	I0717 17:59:53.583170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.493917ms"
	I0717 17:59:53.583254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.96µs"
	I0717 18:00:20.370944       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-866205-m03\" does not exist"
	I0717 18:00:20.371066       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:00:20.437396       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-866205-m03" podCIDRs=["10.244.2.0/24"]
	I0717 18:00:23.602026       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-866205-m03"
	I0717 18:00:40.674287       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:01:08.587553       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:01:09.539448       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-866205-m03\" does not exist"
	I0717 18:01:09.540162       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:01:09.551251       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-866205-m03" podCIDRs=["10.244.3.0/24"]
	I0717 18:01:28.466433       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:02:13.651645       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:02:13.716849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.40575ms"
	I0717 18:02:13.717077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.81µs"
	
	
	==> kube-proxy [53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106] <==
	I0717 17:58:45.289683       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:58:45.299664       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0717 17:58:45.447485       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:58:45.447537       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:58:45.447553       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:58:45.454761       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:58:45.454986       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:58:45.455016       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:58:45.456797       1 config.go:192] "Starting service config controller"
	I0717 17:58:45.456819       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:58:45.456863       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:58:45.456867       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:58:45.457278       1 config.go:319] "Starting node config controller"
	I0717 17:58:45.457342       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:58:45.557766       1 shared_informer.go:320] Caches are synced for node config
	I0717 17:58:45.557809       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:58:45.557848       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a600dc9dc4cbfe810d7bfdddf4001a2a6835b4f561ddee8ec89a3b97c0781e7e] <==
	I0717 18:05:19.379216       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:05:19.405997       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0717 18:05:19.465214       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:05:19.465264       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:05:19.465280       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:05:19.467861       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:05:19.468090       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:05:19.468110       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:05:19.470483       1 config.go:192] "Starting service config controller"
	I0717 18:05:19.470551       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:05:19.470595       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:05:19.470611       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:05:19.471111       1 config.go:319] "Starting node config controller"
	I0717 18:05:19.471136       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:05:19.571070       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:05:19.571258       1 shared_informer.go:320] Caches are synced for node config
	I0717 18:05:19.571424       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a] <==
	E0717 17:58:28.291576       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 17:58:28.290811       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:58:28.291624       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 17:58:28.290881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 17:58:28.291670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 17:58:28.291018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 17:58:28.291739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 17:58:28.291032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 17:58:28.291791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 17:58:28.291146       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:58:28.291838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:58:29.229248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:58:29.229427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:58:29.290808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:58:29.290918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:58:29.292966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:58:29.293066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:58:29.381218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 17:58:29.381362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 17:58:29.454791       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 17:58:29.454902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 17:58:29.471086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 17:58:29.471130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0717 17:58:29.886522       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 18:03:33.698671       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ee9b4a09a89d9644c160bc53340de6d77de564f71b2afa786f3a582fdabfda56] <==
	W0717 18:05:17.817050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:05:17.817059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 18:05:17.817128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:05:17.817151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 18:05:17.817204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:05:17.817213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:05:17.817272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:05:17.817295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:05:17.817399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:05:17.817423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:05:17.817480       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:05:17.817503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:05:17.817556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:05:17.817578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:05:17.817693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:05:17.817715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:05:17.817787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:05:17.817809       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:05:17.817861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:05:17.817885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:05:17.817934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:05:17.817956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:05:17.817966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:05:17.817971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0717 18:05:18.807376       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 18:05:14 multinode-866205 kubelet[3081]: E0717 18:05:14.276065    3081 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.16:8443: connect: connection refused" node="multinode-866205"
	Jul 17 18:05:15 multinode-866205 kubelet[3081]: I0717 18:05:15.078186    3081 kubelet_node_status.go:73] "Attempting to register node" node="multinode-866205"
	Jul 17 18:05:17 multinode-866205 kubelet[3081]: I0717 18:05:17.982962    3081 kubelet_node_status.go:112] "Node was previously registered" node="multinode-866205"
	Jul 17 18:05:17 multinode-866205 kubelet[3081]: I0717 18:05:17.983587    3081 kubelet_node_status.go:76] "Successfully registered node" node="multinode-866205"
	Jul 17 18:05:17 multinode-866205 kubelet[3081]: I0717 18:05:17.985503    3081 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 18:05:17 multinode-866205 kubelet[3081]: I0717 18:05:17.986910    3081 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.549363    3081 apiserver.go:52] "Watching apiserver"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.552604    3081 topology_manager.go:215] "Topology Admit Handler" podUID="59db5a4d-7403-430d-af09-5a42d354c16c" podNamespace="kube-system" podName="kindnet-r7gm7"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.552848    3081 topology_manager.go:215] "Topology Admit Handler" podUID="4463bfd0-32aa-4f9a-9012-09c438fa3629" podNamespace="kube-system" podName="kube-proxy-tp9f2"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.552942    3081 topology_manager.go:215] "Topology Admit Handler" podUID="2f2998e9-2aa3-4640-81e5-96bdadc07c15" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qmclk"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.553014    3081 topology_manager.go:215] "Topology Admit Handler" podUID="700ef325-89ff-4051-a800-83e11439fcfb" podNamespace="kube-system" podName="storage-provisioner"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.553098    3081 topology_manager.go:215] "Topology Admit Handler" podUID="505e4353-4f57-49a2-b738-3a4a6393867a" podNamespace="default" podName="busybox-fc5497c4f-pkq4s"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.566118    3081 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.593924    3081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4463bfd0-32aa-4f9a-9012-09c438fa3629-lib-modules\") pod \"kube-proxy-tp9f2\" (UID: \"4463bfd0-32aa-4f9a-9012-09c438fa3629\") " pod="kube-system/kube-proxy-tp9f2"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.594090    3081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59db5a4d-7403-430d-af09-5a42d354c16c-lib-modules\") pod \"kindnet-r7gm7\" (UID: \"59db5a4d-7403-430d-af09-5a42d354c16c\") " pod="kube-system/kindnet-r7gm7"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.594155    3081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4463bfd0-32aa-4f9a-9012-09c438fa3629-xtables-lock\") pod \"kube-proxy-tp9f2\" (UID: \"4463bfd0-32aa-4f9a-9012-09c438fa3629\") " pod="kube-system/kube-proxy-tp9f2"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.594204    3081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/700ef325-89ff-4051-a800-83e11439fcfb-tmp\") pod \"storage-provisioner\" (UID: \"700ef325-89ff-4051-a800-83e11439fcfb\") " pod="kube-system/storage-provisioner"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.594271    3081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59db5a4d-7403-430d-af09-5a42d354c16c-cni-cfg\") pod \"kindnet-r7gm7\" (UID: \"59db5a4d-7403-430d-af09-5a42d354c16c\") " pod="kube-system/kindnet-r7gm7"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.594386    3081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59db5a4d-7403-430d-af09-5a42d354c16c-xtables-lock\") pod \"kindnet-r7gm7\" (UID: \"59db5a4d-7403-430d-af09-5a42d354c16c\") " pod="kube-system/kindnet-r7gm7"
	Jul 17 18:05:22 multinode-866205 kubelet[3081]: I0717 18:05:22.857250    3081 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 17 18:06:13 multinode-866205 kubelet[3081]: E0717 18:06:13.631667    3081 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:06:13 multinode-866205 kubelet[3081]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:06:13 multinode-866205 kubelet[3081]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:06:13 multinode-866205 kubelet[3081]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:06:13 multinode-866205 kubelet[3081]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:06:54.696537   52006 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19283-14386/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-866205 -n multinode-866205
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-866205 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (325.40s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 stop
E0717 18:08:21.396482   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-866205 stop: exit status 82 (2m0.460708038s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-866205-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-866205 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-866205 status: exit status 3 (18.699257288s)

                                                
                                                
-- stdout --
	multinode-866205
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-866205-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:09:17.793267   52666 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.113:22: connect: no route to host
	E0717 18:09:17.793298   52666 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.113:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-866205 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-866205 -n multinode-866205
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-866205 logs -n 25: (1.375242195s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp multinode-866205-m02:/home/docker/cp-test.txt                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205:/home/docker/cp-test_multinode-866205-m02_multinode-866205.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n multinode-866205 sudo cat                                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | /home/docker/cp-test_multinode-866205-m02_multinode-866205.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp multinode-866205-m02:/home/docker/cp-test.txt                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03:/home/docker/cp-test_multinode-866205-m02_multinode-866205-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n multinode-866205-m03 sudo cat                                   | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | /home/docker/cp-test_multinode-866205-m02_multinode-866205-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp testdata/cp-test.txt                                                | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp multinode-866205-m03:/home/docker/cp-test.txt                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1415765283/001/cp-test_multinode-866205-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp multinode-866205-m03:/home/docker/cp-test.txt                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205:/home/docker/cp-test_multinode-866205-m03_multinode-866205.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n multinode-866205 sudo cat                                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | /home/docker/cp-test_multinode-866205-m03_multinode-866205.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-866205 cp multinode-866205-m03:/home/docker/cp-test.txt                       | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m02:/home/docker/cp-test_multinode-866205-m03_multinode-866205-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n                                                                 | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | multinode-866205-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-866205 ssh -n multinode-866205-m02 sudo cat                                   | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	|         | /home/docker/cp-test_multinode-866205-m03_multinode-866205-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-866205 node stop m03                                                          | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:00 UTC |
	| node    | multinode-866205 node start                                                             | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:00 UTC | 17 Jul 24 18:01 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-866205                                                                | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:01 UTC |                     |
	| stop    | -p multinode-866205                                                                     | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:01 UTC |                     |
	| start   | -p multinode-866205                                                                     | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:03 UTC | 17 Jul 24 18:06 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-866205                                                                | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:06 UTC |                     |
	| node    | multinode-866205 node delete                                                            | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:06 UTC | 17 Jul 24 18:06 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-866205 stop                                                                   | multinode-866205 | jenkins | v1.33.1 | 17 Jul 24 18:06 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:03:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:03:32.846683   50854 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:03:32.846928   50854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:03:32.846936   50854 out.go:304] Setting ErrFile to fd 2...
	I0717 18:03:32.846940   50854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:03:32.847120   50854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:03:32.847622   50854 out.go:298] Setting JSON to false
	I0717 18:03:32.848505   50854 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6356,"bootTime":1721233057,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:03:32.848566   50854 start.go:139] virtualization: kvm guest
	I0717 18:03:32.850891   50854 out.go:177] * [multinode-866205] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:03:32.852331   50854 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:03:32.852349   50854 notify.go:220] Checking for updates...
	I0717 18:03:32.854843   50854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:03:32.856079   50854 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:03:32.857102   50854 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:03:32.858358   50854 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:03:32.859592   50854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:03:32.861195   50854 config.go:182] Loaded profile config "multinode-866205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:03:32.861290   50854 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:03:32.861710   50854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:03:32.861753   50854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:03:32.877874   50854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40375
	I0717 18:03:32.878350   50854 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:03:32.878957   50854 main.go:141] libmachine: Using API Version  1
	I0717 18:03:32.878977   50854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:03:32.879275   50854 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:03:32.879446   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:03:32.914763   50854 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:03:32.916013   50854 start.go:297] selected driver: kvm2
	I0717 18:03:32.916027   50854 start.go:901] validating driver "kvm2" against &{Name:multinode-866205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-866205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:03:32.916144   50854 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:03:32.916439   50854 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:03:32.916496   50854 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:03:32.930492   50854 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:03:32.931164   50854 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:03:32.931193   50854 cni.go:84] Creating CNI manager for ""
	I0717 18:03:32.931199   50854 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 18:03:32.931251   50854 start.go:340] cluster config:
	{Name:multinode-866205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:multinode-866205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:03:32.931359   50854 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:03:32.933111   50854 out.go:177] * Starting "multinode-866205" primary control-plane node in "multinode-866205" cluster
	I0717 18:03:32.934404   50854 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:03:32.934440   50854 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:03:32.934449   50854 cache.go:56] Caching tarball of preloaded images
	I0717 18:03:32.934528   50854 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:03:32.934542   50854 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:03:32.934649   50854 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/config.json ...
	I0717 18:03:32.934829   50854 start.go:360] acquireMachinesLock for multinode-866205: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:03:32.934866   50854 start.go:364] duration metric: took 21.228µs to acquireMachinesLock for "multinode-866205"
	I0717 18:03:32.934884   50854 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:03:32.934891   50854 fix.go:54] fixHost starting: 
	I0717 18:03:32.935129   50854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:03:32.935163   50854 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:03:32.948991   50854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0717 18:03:32.949427   50854 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:03:32.949849   50854 main.go:141] libmachine: Using API Version  1
	I0717 18:03:32.949868   50854 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:03:32.950216   50854 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:03:32.950395   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:03:32.950558   50854 main.go:141] libmachine: (multinode-866205) Calling .GetState
	I0717 18:03:32.952038   50854 fix.go:112] recreateIfNeeded on multinode-866205: state=Running err=<nil>
	W0717 18:03:32.952053   50854 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:03:32.953768   50854 out.go:177] * Updating the running kvm2 "multinode-866205" VM ...
	I0717 18:03:32.954937   50854 machine.go:94] provisionDockerMachine start ...
	I0717 18:03:32.954953   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:03:32.955127   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:32.957394   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:32.957795   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:32.957817   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:32.957905   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:03:32.958060   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:32.958183   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:32.958312   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:03:32.958519   50854 main.go:141] libmachine: Using SSH client type: native
	I0717 18:03:32.958765   50854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0717 18:03:32.958780   50854 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:03:33.061762   50854 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-866205
	
	I0717 18:03:33.061783   50854 main.go:141] libmachine: (multinode-866205) Calling .GetMachineName
	I0717 18:03:33.062012   50854 buildroot.go:166] provisioning hostname "multinode-866205"
	I0717 18:03:33.062035   50854 main.go:141] libmachine: (multinode-866205) Calling .GetMachineName
	I0717 18:03:33.062245   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:33.064872   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.065203   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.065228   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.065390   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:03:33.065569   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.065737   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.065874   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:03:33.066019   50854 main.go:141] libmachine: Using SSH client type: native
	I0717 18:03:33.066218   50854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0717 18:03:33.066235   50854 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-866205 && echo "multinode-866205" | sudo tee /etc/hostname
	I0717 18:03:33.180298   50854 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-866205
	
	I0717 18:03:33.180344   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:33.183406   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.183869   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.183913   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.184097   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:03:33.184284   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.184436   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.184587   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:03:33.184786   50854 main.go:141] libmachine: Using SSH client type: native
	I0717 18:03:33.184981   50854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0717 18:03:33.185005   50854 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-866205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-866205/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-866205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:03:33.285561   50854 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:03:33.285596   50854 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:03:33.285615   50854 buildroot.go:174] setting up certificates
	I0717 18:03:33.285624   50854 provision.go:84] configureAuth start
	I0717 18:03:33.285633   50854 main.go:141] libmachine: (multinode-866205) Calling .GetMachineName
	I0717 18:03:33.286003   50854 main.go:141] libmachine: (multinode-866205) Calling .GetIP
	I0717 18:03:33.289046   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.289378   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.289398   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.289543   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:33.291689   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.292053   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.292079   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.292199   50854 provision.go:143] copyHostCerts
	I0717 18:03:33.292234   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:03:33.292269   50854 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:03:33.292282   50854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:03:33.292348   50854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:03:33.292440   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:03:33.292457   50854 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:03:33.292464   50854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:03:33.292487   50854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:03:33.292530   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:03:33.292545   50854 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:03:33.292557   50854 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:03:33.292587   50854 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:03:33.292629   50854 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.multinode-866205 san=[127.0.0.1 192.168.39.16 localhost minikube multinode-866205]
	I0717 18:03:33.425029   50854 provision.go:177] copyRemoteCerts
	I0717 18:03:33.425088   50854 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:03:33.425114   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:33.427637   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.427962   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.427984   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.428170   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:03:33.428365   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.428539   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:03:33.428686   50854 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/multinode-866205/id_rsa Username:docker}
	I0717 18:03:33.506665   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:03:33.506736   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:03:33.529793   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:03:33.529865   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 18:03:33.552889   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:03:33.552969   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:03:33.575857   50854 provision.go:87] duration metric: took 290.220074ms to configureAuth
	I0717 18:03:33.575889   50854 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:03:33.576096   50854 config.go:182] Loaded profile config "multinode-866205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:03:33.576163   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:03:33.578902   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.579261   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:03:33.579304   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:03:33.579535   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:03:33.579704   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.579885   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:03:33.580134   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:03:33.580344   50854 main.go:141] libmachine: Using SSH client type: native
	I0717 18:03:33.580541   50854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0717 18:03:33.580561   50854 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:05:04.232364   50854 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:05:04.232392   50854 machine.go:97] duration metric: took 1m31.277443119s to provisionDockerMachine
	I0717 18:05:04.232407   50854 start.go:293] postStartSetup for "multinode-866205" (driver="kvm2")
	I0717 18:05:04.232420   50854 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:05:04.232445   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:05:04.232742   50854 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:05:04.232768   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:05:04.235936   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.236331   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:04.236357   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.236476   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:05:04.236685   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:05:04.236854   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:05:04.237093   50854 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/multinode-866205/id_rsa Username:docker}
	I0717 18:05:04.315844   50854 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:05:04.319643   50854 command_runner.go:130] > NAME=Buildroot
	I0717 18:05:04.319665   50854 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0717 18:05:04.319671   50854 command_runner.go:130] > ID=buildroot
	I0717 18:05:04.319677   50854 command_runner.go:130] > VERSION_ID=2023.02.9
	I0717 18:05:04.319690   50854 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0717 18:05:04.319746   50854 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:05:04.319770   50854 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:05:04.319848   50854 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:05:04.319996   50854 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:05:04.320012   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /etc/ssl/certs/215772.pem
	I0717 18:05:04.320134   50854 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:05:04.329050   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
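The filesync step above copies any host-side assets under .minikube/files into the guest; here that is a certificate bundle dropped into /etc/ssl/certs. If one wanted to confirm the copy by hand (not part of the test run, and assuming the file is a PEM-encoded certificate as its location suggests), something like the following would do:

	out/minikube-linux-amd64 -p multinode-866205 ssh "sudo openssl x509 -in /etc/ssl/certs/215772.pem -noout -subject -enddate"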
	I0717 18:05:04.350949   50854 start.go:296] duration metric: took 118.52848ms for postStartSetup
	I0717 18:05:04.350995   50854 fix.go:56] duration metric: took 1m31.416102353s for fixHost
	I0717 18:05:04.351020   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:05:04.353635   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.353987   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:04.354023   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.354160   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:05:04.354366   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:05:04.354537   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:05:04.354663   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:05:04.354885   50854 main.go:141] libmachine: Using SSH client type: native
	I0717 18:05:04.355055   50854 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.16 22 <nil> <nil>}
	I0717 18:05:04.355067   50854 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:05:04.453197   50854 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721239504.425295517
	
	I0717 18:05:04.453222   50854 fix.go:216] guest clock: 1721239504.425295517
	I0717 18:05:04.453229   50854 fix.go:229] Guest: 2024-07-17 18:05:04.425295517 +0000 UTC Remote: 2024-07-17 18:05:04.351000553 +0000 UTC m=+91.537763001 (delta=74.294964ms)
	I0717 18:05:04.453246   50854 fix.go:200] guest clock delta is within tolerance: 74.294964ms
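For context, the fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host-side timestamp, and skip any resync because the ~74 ms delta is inside the allowed tolerance. A rough Go sketch of that comparison, using the values from this log and an assumed one-second tolerance (the actual threshold is not printed here):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance reports whether the guest clock is close
	// enough to the host clock that no resynchronization is needed.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Timestamps taken from the log lines above.
		guest := time.Unix(1721239504, 425295517) // 2024-07-17 18:05:04.425295517 +0000 UTC
		host := time.Unix(1721239504, 351000553)  // 2024-07-17 18:05:04.351000553 +0000 UTC
		delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=74.294964ms within tolerance=true
	}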
	I0717 18:05:04.453251   50854 start.go:83] releasing machines lock for "multinode-866205", held for 1m31.51837647s
	I0717 18:05:04.453268   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:05:04.453510   50854 main.go:141] libmachine: (multinode-866205) Calling .GetIP
	I0717 18:05:04.455802   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.456071   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:04.456102   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.456201   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:05:04.456681   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:05:04.456833   50854 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:05:04.456894   50854 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:05:04.456967   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:05:04.457091   50854 ssh_runner.go:195] Run: cat /version.json
	I0717 18:05:04.457116   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:05:04.459355   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.459639   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.459671   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:04.459692   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.459866   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:05:04.460032   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:05:04.460050   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:04.460069   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:04.460171   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:05:04.460260   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:05:04.460292   50854 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/multinode-866205/id_rsa Username:docker}
	I0717 18:05:04.460393   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:05:04.460554   50854 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:05:04.460683   50854 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/multinode-866205/id_rsa Username:docker}
	I0717 18:05:04.533733   50854 command_runner.go:130] > {"iso_version": "v1.33.1-1721146474-19264", "kicbase_version": "v0.0.44-1721064868-19249", "minikube_version": "v1.33.1", "commit": "6e0d7ef26437c947028f356d4449a323918e966e"}
	I0717 18:05:04.534102   50854 ssh_runner.go:195] Run: systemctl --version
	I0717 18:05:04.574027   50854 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 18:05:04.574592   50854 command_runner.go:130] > systemd 252 (252)
	I0717 18:05:04.574628   50854 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0717 18:05:04.574687   50854 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:05:04.728095   50854 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 18:05:04.735084   50854 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0717 18:05:04.735123   50854 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:05:04.735189   50854 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:05:04.743824   50854 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 18:05:04.743849   50854 start.go:495] detecting cgroup driver to use...
	I0717 18:05:04.743921   50854 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:05:04.759060   50854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:05:04.771897   50854 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:05:04.771946   50854 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:05:04.784463   50854 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:05:04.796772   50854 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:05:04.941031   50854 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:05:05.076071   50854 docker.go:233] disabling docker service ...
	I0717 18:05:05.076150   50854 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:05:05.091990   50854 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:05:05.104906   50854 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:05:05.237663   50854 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:05:05.377281   50854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:05:05.390355   50854 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:05:05.407510   50854 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
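The file written above tells crictl which CRI socket to talk to, so the later `sudo crictl ...` invocations in this log need no explicit endpoint flag. Illustratively, the explicit form would be:

	# /etc/crictl.yaml as written above
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# equivalent to passing the endpoint on the command line:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version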
	I0717 18:05:05.407560   50854 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:05:05.407610   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.417108   50854 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:05:05.417164   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.426481   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.435752   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.444979   50854 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:05:05.454690   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.464181   50854 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:05:05.474673   50854 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
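Taken together, the sed edits above leave the CRI-O drop-in configuration roughly in the following shape (a sketch only; the real /etc/crio/crio.conf.d/02-crio.conf on the guest carries additional settings, and the section headers are assumed from CRI-O's stock config layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]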
	I0717 18:05:05.484900   50854 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:05:05.493596   50854 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 18:05:05.493651   50854 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:05:05.501828   50854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:05:05.635967   50854 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:05:11.249442   50854 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.613435488s)
	I0717 18:05:11.249476   50854 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:05:11.249530   50854 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:05:11.254143   50854 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 18:05:11.254168   50854 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 18:05:11.254177   50854 command_runner.go:130] > Device: 0,22	Inode: 1340        Links: 1
	I0717 18:05:11.254186   50854 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 18:05:11.254194   50854 command_runner.go:130] > Access: 2024-07-17 18:05:11.125925713 +0000
	I0717 18:05:11.254205   50854 command_runner.go:130] > Modify: 2024-07-17 18:05:11.125925713 +0000
	I0717 18:05:11.254212   50854 command_runner.go:130] > Change: 2024-07-17 18:05:11.125925713 +0000
	I0717 18:05:11.254219   50854 command_runner.go:130] >  Birth: -
	I0717 18:05:11.254234   50854 start.go:563] Will wait 60s for crictl version
	I0717 18:05:11.254270   50854 ssh_runner.go:195] Run: which crictl
	I0717 18:05:11.257582   50854 command_runner.go:130] > /usr/bin/crictl
	I0717 18:05:11.257718   50854 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:05:11.294443   50854 command_runner.go:130] > Version:  0.1.0
	I0717 18:05:11.294465   50854 command_runner.go:130] > RuntimeName:  cri-o
	I0717 18:05:11.294472   50854 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0717 18:05:11.294480   50854 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 18:05:11.294554   50854 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:05:11.294628   50854 ssh_runner.go:195] Run: crio --version
	I0717 18:05:11.323179   50854 command_runner.go:130] > crio version 1.29.1
	I0717 18:05:11.323196   50854 command_runner.go:130] > Version:        1.29.1
	I0717 18:05:11.323202   50854 command_runner.go:130] > GitCommit:      unknown
	I0717 18:05:11.323206   50854 command_runner.go:130] > GitCommitDate:  unknown
	I0717 18:05:11.323210   50854 command_runner.go:130] > GitTreeState:   clean
	I0717 18:05:11.323228   50854 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 18:05:11.323233   50854 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 18:05:11.323236   50854 command_runner.go:130] > Compiler:       gc
	I0717 18:05:11.323241   50854 command_runner.go:130] > Platform:       linux/amd64
	I0717 18:05:11.323245   50854 command_runner.go:130] > Linkmode:       dynamic
	I0717 18:05:11.323250   50854 command_runner.go:130] > BuildTags:      
	I0717 18:05:11.323254   50854 command_runner.go:130] >   containers_image_ostree_stub
	I0717 18:05:11.323259   50854 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 18:05:11.323266   50854 command_runner.go:130] >   btrfs_noversion
	I0717 18:05:11.323272   50854 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 18:05:11.323279   50854 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 18:05:11.323284   50854 command_runner.go:130] >   seccomp
	I0717 18:05:11.323290   50854 command_runner.go:130] > LDFlags:          unknown
	I0717 18:05:11.323296   50854 command_runner.go:130] > SeccompEnabled:   true
	I0717 18:05:11.323302   50854 command_runner.go:130] > AppArmorEnabled:  false
	I0717 18:05:11.323424   50854 ssh_runner.go:195] Run: crio --version
	I0717 18:05:11.348857   50854 command_runner.go:130] > crio version 1.29.1
	I0717 18:05:11.348878   50854 command_runner.go:130] > Version:        1.29.1
	I0717 18:05:11.348884   50854 command_runner.go:130] > GitCommit:      unknown
	I0717 18:05:11.348889   50854 command_runner.go:130] > GitCommitDate:  unknown
	I0717 18:05:11.348893   50854 command_runner.go:130] > GitTreeState:   clean
	I0717 18:05:11.348898   50854 command_runner.go:130] > BuildDate:      2024-07-16T21:25:55Z
	I0717 18:05:11.348903   50854 command_runner.go:130] > GoVersion:      go1.21.6
	I0717 18:05:11.348906   50854 command_runner.go:130] > Compiler:       gc
	I0717 18:05:11.348911   50854 command_runner.go:130] > Platform:       linux/amd64
	I0717 18:05:11.348916   50854 command_runner.go:130] > Linkmode:       dynamic
	I0717 18:05:11.348939   50854 command_runner.go:130] > BuildTags:      
	I0717 18:05:11.348962   50854 command_runner.go:130] >   containers_image_ostree_stub
	I0717 18:05:11.348969   50854 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0717 18:05:11.348976   50854 command_runner.go:130] >   btrfs_noversion
	I0717 18:05:11.348982   50854 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0717 18:05:11.348987   50854 command_runner.go:130] >   libdm_no_deferred_remove
	I0717 18:05:11.348991   50854 command_runner.go:130] >   seccomp
	I0717 18:05:11.348995   50854 command_runner.go:130] > LDFlags:          unknown
	I0717 18:05:11.349002   50854 command_runner.go:130] > SeccompEnabled:   true
	I0717 18:05:11.349007   50854 command_runner.go:130] > AppArmorEnabled:  false
	I0717 18:05:11.351827   50854 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:05:11.353186   50854 main.go:141] libmachine: (multinode-866205) Calling .GetIP
	I0717 18:05:11.355812   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:11.356127   50854 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:05:11.356149   50854 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:05:11.356303   50854 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:05:11.360206   50854 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0717 18:05:11.360291   50854 kubeadm.go:883] updating cluster {Name:multinode-866205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.2 ClusterName:multinode-866205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:05:11.360483   50854 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:05:11.360531   50854 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:05:11.404035   50854 command_runner.go:130] > {
	I0717 18:05:11.404050   50854 command_runner.go:130] >   "images": [
	I0717 18:05:11.404054   50854 command_runner.go:130] >     {
	I0717 18:05:11.404061   50854 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 18:05:11.404070   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404076   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 18:05:11.404079   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404083   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404091   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 18:05:11.404100   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 18:05:11.404106   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404111   50854 command_runner.go:130] >       "size": "65908273",
	I0717 18:05:11.404118   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404125   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404133   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404142   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404145   50854 command_runner.go:130] >     },
	I0717 18:05:11.404150   50854 command_runner.go:130] >     {
	I0717 18:05:11.404157   50854 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 18:05:11.404163   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404168   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 18:05:11.404172   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404175   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404183   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 18:05:11.404194   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 18:05:11.404200   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404209   50854 command_runner.go:130] >       "size": "87165492",
	I0717 18:05:11.404214   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404224   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404234   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404241   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404250   50854 command_runner.go:130] >     },
	I0717 18:05:11.404254   50854 command_runner.go:130] >     {
	I0717 18:05:11.404261   50854 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 18:05:11.404265   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404270   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 18:05:11.404274   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404278   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404288   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 18:05:11.404294   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 18:05:11.404299   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404304   50854 command_runner.go:130] >       "size": "1363676",
	I0717 18:05:11.404308   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404312   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404318   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404322   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404325   50854 command_runner.go:130] >     },
	I0717 18:05:11.404328   50854 command_runner.go:130] >     {
	I0717 18:05:11.404334   50854 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 18:05:11.404340   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404345   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 18:05:11.404351   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404354   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404368   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 18:05:11.404381   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 18:05:11.404386   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404390   50854 command_runner.go:130] >       "size": "31470524",
	I0717 18:05:11.404394   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404398   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404405   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404409   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404413   50854 command_runner.go:130] >     },
	I0717 18:05:11.404417   50854 command_runner.go:130] >     {
	I0717 18:05:11.404425   50854 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 18:05:11.404430   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404436   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 18:05:11.404440   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404443   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404450   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 18:05:11.404459   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 18:05:11.404462   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404469   50854 command_runner.go:130] >       "size": "61245718",
	I0717 18:05:11.404472   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404476   50854 command_runner.go:130] >       "username": "nonroot",
	I0717 18:05:11.404481   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404485   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404492   50854 command_runner.go:130] >     },
	I0717 18:05:11.404495   50854 command_runner.go:130] >     {
	I0717 18:05:11.404501   50854 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 18:05:11.404505   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404510   50854 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 18:05:11.404515   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404519   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404526   50854 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 18:05:11.404535   50854 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 18:05:11.404538   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404542   50854 command_runner.go:130] >       "size": "150779692",
	I0717 18:05:11.404547   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.404551   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.404554   50854 command_runner.go:130] >       },
	I0717 18:05:11.404558   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404564   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404567   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404571   50854 command_runner.go:130] >     },
	I0717 18:05:11.404574   50854 command_runner.go:130] >     {
	I0717 18:05:11.404579   50854 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 18:05:11.404585   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404590   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 18:05:11.404593   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404597   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404604   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 18:05:11.404613   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 18:05:11.404618   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404624   50854 command_runner.go:130] >       "size": "117609954",
	I0717 18:05:11.404630   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.404634   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.404639   50854 command_runner.go:130] >       },
	I0717 18:05:11.404643   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404650   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404654   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404660   50854 command_runner.go:130] >     },
	I0717 18:05:11.404663   50854 command_runner.go:130] >     {
	I0717 18:05:11.404671   50854 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 18:05:11.404677   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404682   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 18:05:11.404686   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404691   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404704   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 18:05:11.404714   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 18:05:11.404720   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404724   50854 command_runner.go:130] >       "size": "112194888",
	I0717 18:05:11.404730   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.404734   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.404741   50854 command_runner.go:130] >       },
	I0717 18:05:11.404745   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404748   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404752   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404755   50854 command_runner.go:130] >     },
	I0717 18:05:11.404758   50854 command_runner.go:130] >     {
	I0717 18:05:11.404764   50854 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 18:05:11.404768   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404774   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 18:05:11.404777   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404781   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404790   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 18:05:11.404799   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 18:05:11.404805   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404809   50854 command_runner.go:130] >       "size": "85953433",
	I0717 18:05:11.404814   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.404819   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404824   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404829   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404835   50854 command_runner.go:130] >     },
	I0717 18:05:11.404838   50854 command_runner.go:130] >     {
	I0717 18:05:11.404845   50854 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 18:05:11.404850   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404856   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 18:05:11.404862   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404866   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404874   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 18:05:11.404883   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 18:05:11.404888   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404892   50854 command_runner.go:130] >       "size": "63051080",
	I0717 18:05:11.404898   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.404902   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.404908   50854 command_runner.go:130] >       },
	I0717 18:05:11.404912   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.404918   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.404921   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.404927   50854 command_runner.go:130] >     },
	I0717 18:05:11.404930   50854 command_runner.go:130] >     {
	I0717 18:05:11.404938   50854 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 18:05:11.404952   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.404957   50854 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 18:05:11.404961   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404964   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.404973   50854 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 18:05:11.404981   50854 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 18:05:11.404986   50854 command_runner.go:130] >       ],
	I0717 18:05:11.404990   50854 command_runner.go:130] >       "size": "750414",
	I0717 18:05:11.404996   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.405000   50854 command_runner.go:130] >         "value": "65535"
	I0717 18:05:11.405006   50854 command_runner.go:130] >       },
	I0717 18:05:11.405010   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.405015   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.405019   50854 command_runner.go:130] >       "pinned": true
	I0717 18:05:11.405025   50854 command_runner.go:130] >     }
	I0717 18:05:11.405028   50854 command_runner.go:130] >   ]
	I0717 18:05:11.405034   50854 command_runner.go:130] > }
	I0717 18:05:11.405189   50854 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:05:11.405199   50854 crio.go:433] Images already preloaded, skipping extraction
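The preload check above parses the JSON emitted by `sudo crictl images --output json` and concludes that every image required for Kubernetes v1.30.2 on CRI-O is already present, so no tarball extraction is needed. A simplified Go sketch of that idea (the struct fields mirror the JSON shown above; the helper name and the exact comparison logic in minikube's crio.go are not reproduced here):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// crictlImages mirrors the shape of `crictl images --output json`.
	type crictlImages struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
		} `json:"images"`
	}

	// allPreloaded reports whether every wanted repo tag appears in the crictl output.
	func allPreloaded(raw []byte, wanted []string) (bool, error) {
		var out crictlImages
		if err := json.Unmarshal(raw, &out); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range out.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, w := range wanted {
			if !have[w] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		raw := []byte(`{"images":[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoTags":["registry.k8s.io/pause:3.9"],"repoDigests":[]}]}`)
		ok, err := allPreloaded(raw, []string{"registry.k8s.io/pause:3.9"})
		fmt.Println(ok, err) // true <nil>
	}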
	I0717 18:05:11.405246   50854 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:05:11.435589   50854 command_runner.go:130] > {
	I0717 18:05:11.435610   50854 command_runner.go:130] >   "images": [
	I0717 18:05:11.435614   50854 command_runner.go:130] >     {
	I0717 18:05:11.435626   50854 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0717 18:05:11.435630   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.435636   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0717 18:05:11.435639   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435643   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.435672   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0717 18:05:11.435684   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0717 18:05:11.435687   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435692   50854 command_runner.go:130] >       "size": "65908273",
	I0717 18:05:11.435696   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.435700   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.435708   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.435714   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.435718   50854 command_runner.go:130] >     },
	I0717 18:05:11.435721   50854 command_runner.go:130] >     {
	I0717 18:05:11.435727   50854 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0717 18:05:11.435731   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.435736   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0717 18:05:11.435742   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435746   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.435752   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0717 18:05:11.435759   50854 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0717 18:05:11.435763   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435767   50854 command_runner.go:130] >       "size": "87165492",
	I0717 18:05:11.435771   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.435777   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.435781   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.435785   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.435788   50854 command_runner.go:130] >     },
	I0717 18:05:11.435792   50854 command_runner.go:130] >     {
	I0717 18:05:11.435798   50854 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0717 18:05:11.435804   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.435809   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0717 18:05:11.435812   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435816   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.435824   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0717 18:05:11.435834   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0717 18:05:11.435837   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435841   50854 command_runner.go:130] >       "size": "1363676",
	I0717 18:05:11.435845   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.435849   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.435855   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.435859   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.435865   50854 command_runner.go:130] >     },
	I0717 18:05:11.435868   50854 command_runner.go:130] >     {
	I0717 18:05:11.435876   50854 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 18:05:11.435881   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.435887   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 18:05:11.435893   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435897   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.435904   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 18:05:11.435916   50854 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 18:05:11.435921   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435925   50854 command_runner.go:130] >       "size": "31470524",
	I0717 18:05:11.435929   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.435933   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.435937   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.435941   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.435946   50854 command_runner.go:130] >     },
	I0717 18:05:11.435949   50854 command_runner.go:130] >     {
	I0717 18:05:11.435955   50854 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0717 18:05:11.435961   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.435966   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0717 18:05:11.435970   50854 command_runner.go:130] >       ],
	I0717 18:05:11.435973   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.435982   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0717 18:05:11.435990   50854 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0717 18:05:11.435995   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436000   50854 command_runner.go:130] >       "size": "61245718",
	I0717 18:05:11.436005   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.436009   50854 command_runner.go:130] >       "username": "nonroot",
	I0717 18:05:11.436015   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436019   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436022   50854 command_runner.go:130] >     },
	I0717 18:05:11.436025   50854 command_runner.go:130] >     {
	I0717 18:05:11.436031   50854 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0717 18:05:11.436037   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436042   50854 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0717 18:05:11.436047   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436051   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436060   50854 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0717 18:05:11.436067   50854 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0717 18:05:11.436072   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436077   50854 command_runner.go:130] >       "size": "150779692",
	I0717 18:05:11.436080   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.436084   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.436088   50854 command_runner.go:130] >       },
	I0717 18:05:11.436092   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436096   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436102   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436105   50854 command_runner.go:130] >     },
	I0717 18:05:11.436110   50854 command_runner.go:130] >     {
	I0717 18:05:11.436118   50854 command_runner.go:130] >       "id": "56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe",
	I0717 18:05:11.436122   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436127   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.2"
	I0717 18:05:11.436133   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436137   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436144   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816",
	I0717 18:05:11.436153   50854 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"
	I0717 18:05:11.436156   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436160   50854 command_runner.go:130] >       "size": "117609954",
	I0717 18:05:11.436167   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.436171   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.436175   50854 command_runner.go:130] >       },
	I0717 18:05:11.436179   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436185   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436189   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436194   50854 command_runner.go:130] >     },
	I0717 18:05:11.436197   50854 command_runner.go:130] >     {
	I0717 18:05:11.436203   50854 command_runner.go:130] >       "id": "e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974",
	I0717 18:05:11.436209   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436214   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.2"
	I0717 18:05:11.436220   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436223   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436236   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e",
	I0717 18:05:11.436246   50854 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"
	I0717 18:05:11.436249   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436253   50854 command_runner.go:130] >       "size": "112194888",
	I0717 18:05:11.436257   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.436261   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.436264   50854 command_runner.go:130] >       },
	I0717 18:05:11.436268   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436272   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436275   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436279   50854 command_runner.go:130] >     },
	I0717 18:05:11.436282   50854 command_runner.go:130] >     {
	I0717 18:05:11.436288   50854 command_runner.go:130] >       "id": "53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772",
	I0717 18:05:11.436294   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436299   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.2"
	I0717 18:05:11.436304   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436308   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436317   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961",
	I0717 18:05:11.436324   50854 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"
	I0717 18:05:11.436329   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436333   50854 command_runner.go:130] >       "size": "85953433",
	I0717 18:05:11.436338   50854 command_runner.go:130] >       "uid": null,
	I0717 18:05:11.436342   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436348   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436352   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436355   50854 command_runner.go:130] >     },
	I0717 18:05:11.436359   50854 command_runner.go:130] >     {
	I0717 18:05:11.436371   50854 command_runner.go:130] >       "id": "7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940",
	I0717 18:05:11.436377   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436381   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.2"
	I0717 18:05:11.436387   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436391   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436400   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc",
	I0717 18:05:11.436407   50854 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"
	I0717 18:05:11.436412   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436416   50854 command_runner.go:130] >       "size": "63051080",
	I0717 18:05:11.436420   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.436423   50854 command_runner.go:130] >         "value": "0"
	I0717 18:05:11.436427   50854 command_runner.go:130] >       },
	I0717 18:05:11.436430   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436434   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436438   50854 command_runner.go:130] >       "pinned": false
	I0717 18:05:11.436442   50854 command_runner.go:130] >     },
	I0717 18:05:11.436447   50854 command_runner.go:130] >     {
	I0717 18:05:11.436453   50854 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 18:05:11.436458   50854 command_runner.go:130] >       "repoTags": [
	I0717 18:05:11.436463   50854 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 18:05:11.436469   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436473   50854 command_runner.go:130] >       "repoDigests": [
	I0717 18:05:11.436479   50854 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 18:05:11.436488   50854 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 18:05:11.436491   50854 command_runner.go:130] >       ],
	I0717 18:05:11.436495   50854 command_runner.go:130] >       "size": "750414",
	I0717 18:05:11.436498   50854 command_runner.go:130] >       "uid": {
	I0717 18:05:11.436502   50854 command_runner.go:130] >         "value": "65535"
	I0717 18:05:11.436505   50854 command_runner.go:130] >       },
	I0717 18:05:11.436509   50854 command_runner.go:130] >       "username": "",
	I0717 18:05:11.436515   50854 command_runner.go:130] >       "spec": null,
	I0717 18:05:11.436518   50854 command_runner.go:130] >       "pinned": true
	I0717 18:05:11.436521   50854 command_runner.go:130] >     }
	I0717 18:05:11.436524   50854 command_runner.go:130] >   ]
	I0717 18:05:11.436530   50854 command_runner.go:130] > }
	I0717 18:05:11.437063   50854 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:05:11.437076   50854 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:05:11.437086   50854 kubeadm.go:934] updating node { 192.168.39.16 8443 v1.30.2 crio true true} ...
	I0717 18:05:11.437177   50854 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-866205 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:multinode-866205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:05:11.437243   50854 ssh_runner.go:195] Run: crio config
	I0717 18:05:11.469210   50854 command_runner.go:130] ! time="2024-07-17 18:05:11.440928153Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0717 18:05:11.475188   50854 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 18:05:11.481102   50854 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 18:05:11.481120   50854 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 18:05:11.481126   50854 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 18:05:11.481129   50854 command_runner.go:130] > #
	I0717 18:05:11.481150   50854 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 18:05:11.481163   50854 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 18:05:11.481172   50854 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 18:05:11.481183   50854 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 18:05:11.481189   50854 command_runner.go:130] > # reload'.
	I0717 18:05:11.481201   50854 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 18:05:11.481213   50854 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 18:05:11.481225   50854 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 18:05:11.481236   50854 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 18:05:11.481242   50854 command_runner.go:130] > [crio]
	I0717 18:05:11.481252   50854 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 18:05:11.481263   50854 command_runner.go:130] > # containers images, in this directory.
	I0717 18:05:11.481268   50854 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0717 18:05:11.481275   50854 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 18:05:11.481283   50854 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0717 18:05:11.481294   50854 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0717 18:05:11.481303   50854 command_runner.go:130] > # imagestore = ""
	I0717 18:05:11.481320   50854 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 18:05:11.481333   50854 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 18:05:11.481342   50854 command_runner.go:130] > storage_driver = "overlay"
	I0717 18:05:11.481353   50854 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 18:05:11.481365   50854 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 18:05:11.481376   50854 command_runner.go:130] > storage_option = [
	I0717 18:05:11.481387   50854 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0717 18:05:11.481395   50854 command_runner.go:130] > ]
	I0717 18:05:11.481406   50854 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 18:05:11.481419   50854 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 18:05:11.481428   50854 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 18:05:11.481435   50854 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 18:05:11.481444   50854 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 18:05:11.481450   50854 command_runner.go:130] > # always happen on a node reboot
	I0717 18:05:11.481455   50854 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 18:05:11.481466   50854 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 18:05:11.481474   50854 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 18:05:11.481480   50854 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 18:05:11.481487   50854 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0717 18:05:11.481494   50854 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 18:05:11.481504   50854 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 18:05:11.481510   50854 command_runner.go:130] > # internal_wipe = true
	I0717 18:05:11.481517   50854 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0717 18:05:11.481524   50854 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0717 18:05:11.481528   50854 command_runner.go:130] > # internal_repair = false
	I0717 18:05:11.481536   50854 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 18:05:11.481543   50854 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 18:05:11.481548   50854 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 18:05:11.481555   50854 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 18:05:11.481561   50854 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 18:05:11.481569   50854 command_runner.go:130] > [crio.api]
	I0717 18:05:11.481576   50854 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 18:05:11.481583   50854 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 18:05:11.481588   50854 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 18:05:11.481595   50854 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 18:05:11.481601   50854 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 18:05:11.481608   50854 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 18:05:11.481612   50854 command_runner.go:130] > # stream_port = "0"
	I0717 18:05:11.481620   50854 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 18:05:11.481626   50854 command_runner.go:130] > # stream_enable_tls = false
	I0717 18:05:11.481632   50854 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 18:05:11.481638   50854 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 18:05:11.481645   50854 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 18:05:11.481653   50854 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 18:05:11.481657   50854 command_runner.go:130] > # minutes.
	I0717 18:05:11.481661   50854 command_runner.go:130] > # stream_tls_cert = ""
	I0717 18:05:11.481669   50854 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 18:05:11.481675   50854 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 18:05:11.481682   50854 command_runner.go:130] > # stream_tls_key = ""
	I0717 18:05:11.481687   50854 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 18:05:11.481695   50854 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 18:05:11.481708   50854 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 18:05:11.481714   50854 command_runner.go:130] > # stream_tls_ca = ""
	I0717 18:05:11.481721   50854 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 18:05:11.481727   50854 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0717 18:05:11.481735   50854 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0717 18:05:11.481741   50854 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0717 18:05:11.481747   50854 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 18:05:11.481755   50854 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 18:05:11.481759   50854 command_runner.go:130] > [crio.runtime]
	I0717 18:05:11.481765   50854 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 18:05:11.481772   50854 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 18:05:11.481779   50854 command_runner.go:130] > # "nofile=1024:2048"
	I0717 18:05:11.481785   50854 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 18:05:11.481791   50854 command_runner.go:130] > # default_ulimits = [
	I0717 18:05:11.481794   50854 command_runner.go:130] > # ]
	I0717 18:05:11.481803   50854 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 18:05:11.481808   50854 command_runner.go:130] > # no_pivot = false
	I0717 18:05:11.481814   50854 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 18:05:11.481821   50854 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 18:05:11.481828   50854 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 18:05:11.481833   50854 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 18:05:11.481840   50854 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 18:05:11.481846   50854 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 18:05:11.481853   50854 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0717 18:05:11.481857   50854 command_runner.go:130] > # Cgroup setting for conmon
	I0717 18:05:11.481865   50854 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 18:05:11.481876   50854 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 18:05:11.481885   50854 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 18:05:11.481892   50854 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 18:05:11.481898   50854 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 18:05:11.481904   50854 command_runner.go:130] > conmon_env = [
	I0717 18:05:11.481909   50854 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 18:05:11.481914   50854 command_runner.go:130] > ]
	I0717 18:05:11.481919   50854 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 18:05:11.481926   50854 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 18:05:11.481932   50854 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 18:05:11.481937   50854 command_runner.go:130] > # default_env = [
	I0717 18:05:11.481941   50854 command_runner.go:130] > # ]
	I0717 18:05:11.481948   50854 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 18:05:11.481957   50854 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0717 18:05:11.481964   50854 command_runner.go:130] > # selinux = false
	I0717 18:05:11.481970   50854 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 18:05:11.481978   50854 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 18:05:11.481985   50854 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 18:05:11.481989   50854 command_runner.go:130] > # seccomp_profile = ""
	I0717 18:05:11.481996   50854 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 18:05:11.482002   50854 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 18:05:11.482009   50854 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 18:05:11.482014   50854 command_runner.go:130] > # which might increase security.
	I0717 18:05:11.482019   50854 command_runner.go:130] > # This option is currently deprecated,
	I0717 18:05:11.482026   50854 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0717 18:05:11.482034   50854 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0717 18:05:11.482040   50854 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 18:05:11.482048   50854 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 18:05:11.482056   50854 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 18:05:11.482062   50854 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 18:05:11.482068   50854 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:05:11.482072   50854 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 18:05:11.482078   50854 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 18:05:11.482084   50854 command_runner.go:130] > # the cgroup blockio controller.
	I0717 18:05:11.482088   50854 command_runner.go:130] > # blockio_config_file = ""
	I0717 18:05:11.482096   50854 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0717 18:05:11.482102   50854 command_runner.go:130] > # blockio parameters.
	I0717 18:05:11.482106   50854 command_runner.go:130] > # blockio_reload = false
	I0717 18:05:11.482114   50854 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 18:05:11.482120   50854 command_runner.go:130] > # irqbalance daemon.
	I0717 18:05:11.482125   50854 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 18:05:11.482132   50854 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0717 18:05:11.482140   50854 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0717 18:05:11.482149   50854 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0717 18:05:11.482157   50854 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0717 18:05:11.482165   50854 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 18:05:11.482170   50854 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:05:11.482175   50854 command_runner.go:130] > # rdt_config_file = ""
	I0717 18:05:11.482180   50854 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 18:05:11.482186   50854 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 18:05:11.482202   50854 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 18:05:11.482209   50854 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 18:05:11.482215   50854 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 18:05:11.482220   50854 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 18:05:11.482226   50854 command_runner.go:130] > # will be added.
	I0717 18:05:11.482230   50854 command_runner.go:130] > # default_capabilities = [
	I0717 18:05:11.482235   50854 command_runner.go:130] > # 	"CHOWN",
	I0717 18:05:11.482239   50854 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 18:05:11.482245   50854 command_runner.go:130] > # 	"FSETID",
	I0717 18:05:11.482249   50854 command_runner.go:130] > # 	"FOWNER",
	I0717 18:05:11.482255   50854 command_runner.go:130] > # 	"SETGID",
	I0717 18:05:11.482259   50854 command_runner.go:130] > # 	"SETUID",
	I0717 18:05:11.482264   50854 command_runner.go:130] > # 	"SETPCAP",
	I0717 18:05:11.482268   50854 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 18:05:11.482275   50854 command_runner.go:130] > # 	"KILL",
	I0717 18:05:11.482278   50854 command_runner.go:130] > # ]
	I0717 18:05:11.482291   50854 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 18:05:11.482311   50854 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 18:05:11.482325   50854 command_runner.go:130] > # add_inheritable_capabilities = false
	I0717 18:05:11.482337   50854 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 18:05:11.482349   50854 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 18:05:11.482358   50854 command_runner.go:130] > default_sysctls = [
	I0717 18:05:11.482367   50854 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0717 18:05:11.482371   50854 command_runner.go:130] > ]
	I0717 18:05:11.482378   50854 command_runner.go:130] > # List of devices on the host that a
	I0717 18:05:11.482384   50854 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 18:05:11.482390   50854 command_runner.go:130] > # allowed_devices = [
	I0717 18:05:11.482394   50854 command_runner.go:130] > # 	"/dev/fuse",
	I0717 18:05:11.482399   50854 command_runner.go:130] > # ]
	I0717 18:05:11.482404   50854 command_runner.go:130] > # List of additional devices. specified as
	I0717 18:05:11.482413   50854 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 18:05:11.482421   50854 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 18:05:11.482428   50854 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 18:05:11.482433   50854 command_runner.go:130] > # additional_devices = [
	I0717 18:05:11.482438   50854 command_runner.go:130] > # ]
	I0717 18:05:11.482442   50854 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 18:05:11.482448   50854 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 18:05:11.482452   50854 command_runner.go:130] > # 	"/etc/cdi",
	I0717 18:05:11.482458   50854 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 18:05:11.482461   50854 command_runner.go:130] > # ]
	I0717 18:05:11.482467   50854 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 18:05:11.482475   50854 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 18:05:11.482480   50854 command_runner.go:130] > # Defaults to false.
	I0717 18:05:11.482484   50854 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 18:05:11.482493   50854 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 18:05:11.482501   50854 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 18:05:11.482505   50854 command_runner.go:130] > # hooks_dir = [
	I0717 18:05:11.482513   50854 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 18:05:11.482519   50854 command_runner.go:130] > # ]
	I0717 18:05:11.482525   50854 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 18:05:11.482533   50854 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 18:05:11.482540   50854 command_runner.go:130] > # its default mounts from the following two files:
	I0717 18:05:11.482543   50854 command_runner.go:130] > #
	I0717 18:05:11.482552   50854 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 18:05:11.482560   50854 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 18:05:11.482569   50854 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 18:05:11.482575   50854 command_runner.go:130] > #
	I0717 18:05:11.482581   50854 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 18:05:11.482589   50854 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 18:05:11.482597   50854 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 18:05:11.482604   50854 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 18:05:11.482608   50854 command_runner.go:130] > #
	I0717 18:05:11.482614   50854 command_runner.go:130] > # default_mounts_file = ""
	I0717 18:05:11.482620   50854 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 18:05:11.482628   50854 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 18:05:11.482634   50854 command_runner.go:130] > pids_limit = 1024
	I0717 18:05:11.482640   50854 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 18:05:11.482647   50854 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 18:05:11.482656   50854 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 18:05:11.482665   50854 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 18:05:11.482672   50854 command_runner.go:130] > # log_size_max = -1
	I0717 18:05:11.482678   50854 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 18:05:11.482685   50854 command_runner.go:130] > # log_to_journald = false
	I0717 18:05:11.482691   50854 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 18:05:11.482697   50854 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 18:05:11.482702   50854 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 18:05:11.482709   50854 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 18:05:11.482714   50854 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 18:05:11.482720   50854 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 18:05:11.482725   50854 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 18:05:11.482731   50854 command_runner.go:130] > # read_only = false
	I0717 18:05:11.482737   50854 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 18:05:11.482744   50854 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 18:05:11.482751   50854 command_runner.go:130] > # live configuration reload.
	I0717 18:05:11.482755   50854 command_runner.go:130] > # log_level = "info"
	I0717 18:05:11.482763   50854 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 18:05:11.482769   50854 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:05:11.482775   50854 command_runner.go:130] > # log_filter = ""
	I0717 18:05:11.482781   50854 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 18:05:11.482790   50854 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 18:05:11.482794   50854 command_runner.go:130] > # separated by comma.
	I0717 18:05:11.482803   50854 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:05:11.482809   50854 command_runner.go:130] > # uid_mappings = ""
	I0717 18:05:11.482815   50854 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 18:05:11.482822   50854 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 18:05:11.482829   50854 command_runner.go:130] > # separated by comma.
	I0717 18:05:11.482837   50854 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:05:11.482843   50854 command_runner.go:130] > # gid_mappings = ""
	I0717 18:05:11.482849   50854 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 18:05:11.482857   50854 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 18:05:11.482865   50854 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 18:05:11.482873   50854 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:05:11.482879   50854 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 18:05:11.482884   50854 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 18:05:11.482892   50854 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 18:05:11.482900   50854 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 18:05:11.482910   50854 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0717 18:05:11.482916   50854 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 18:05:11.482921   50854 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 18:05:11.482929   50854 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 18:05:11.482937   50854 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 18:05:11.482942   50854 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 18:05:11.482948   50854 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 18:05:11.482955   50854 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 18:05:11.482962   50854 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 18:05:11.482971   50854 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 18:05:11.482977   50854 command_runner.go:130] > drop_infra_ctr = false
	I0717 18:05:11.482983   50854 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 18:05:11.482990   50854 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 18:05:11.482997   50854 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 18:05:11.483003   50854 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 18:05:11.483010   50854 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0717 18:05:11.483022   50854 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0717 18:05:11.483030   50854 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0717 18:05:11.483037   50854 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0717 18:05:11.483042   50854 command_runner.go:130] > # shared_cpuset = ""
	I0717 18:05:11.483047   50854 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 18:05:11.483052   50854 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 18:05:11.483058   50854 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 18:05:11.483064   50854 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 18:05:11.483070   50854 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0717 18:05:11.483075   50854 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0717 18:05:11.483083   50854 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0717 18:05:11.483090   50854 command_runner.go:130] > # enable_criu_support = false
	I0717 18:05:11.483095   50854 command_runner.go:130] > # Enable/disable the generation of the container,
	I0717 18:05:11.483102   50854 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0717 18:05:11.483108   50854 command_runner.go:130] > # enable_pod_events = false
	I0717 18:05:11.483115   50854 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 18:05:11.483123   50854 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 18:05:11.483130   50854 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0717 18:05:11.483133   50854 command_runner.go:130] > # default_runtime = "runc"
	I0717 18:05:11.483141   50854 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 18:05:11.483147   50854 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 18:05:11.483157   50854 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 18:05:11.483164   50854 command_runner.go:130] > # creation as a file is not desired either.
	I0717 18:05:11.483172   50854 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 18:05:11.483179   50854 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 18:05:11.483185   50854 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 18:05:11.483192   50854 command_runner.go:130] > # ]
	I0717 18:05:11.483202   50854 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 18:05:11.483214   50854 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 18:05:11.483226   50854 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0717 18:05:11.483236   50854 command_runner.go:130] > # Each entry in the table should follow the format:
	I0717 18:05:11.483243   50854 command_runner.go:130] > #
	I0717 18:05:11.483250   50854 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0717 18:05:11.483261   50854 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0717 18:05:11.483286   50854 command_runner.go:130] > # runtime_type = "oci"
	I0717 18:05:11.483296   50854 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0717 18:05:11.483304   50854 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0717 18:05:11.483314   50854 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0717 18:05:11.483327   50854 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0717 18:05:11.483336   50854 command_runner.go:130] > # monitor_env = []
	I0717 18:05:11.483346   50854 command_runner.go:130] > # privileged_without_host_devices = false
	I0717 18:05:11.483356   50854 command_runner.go:130] > # allowed_annotations = []
	I0717 18:05:11.483366   50854 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0717 18:05:11.483372   50854 command_runner.go:130] > # Where:
	I0717 18:05:11.483378   50854 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0717 18:05:11.483386   50854 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0717 18:05:11.483392   50854 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 18:05:11.483400   50854 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 18:05:11.483404   50854 command_runner.go:130] > #   in $PATH.
	I0717 18:05:11.483411   50854 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0717 18:05:11.483418   50854 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 18:05:11.483424   50854 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0717 18:05:11.483430   50854 command_runner.go:130] > #   state.
	I0717 18:05:11.483436   50854 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 18:05:11.483444   50854 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 18:05:11.483453   50854 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 18:05:11.483460   50854 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 18:05:11.483468   50854 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 18:05:11.483474   50854 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 18:05:11.483480   50854 command_runner.go:130] > #   The currently recognized values are:
	I0717 18:05:11.483486   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 18:05:11.483496   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 18:05:11.483504   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 18:05:11.483510   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 18:05:11.483519   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 18:05:11.483527   50854 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 18:05:11.483534   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0717 18:05:11.483541   50854 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0717 18:05:11.483549   50854 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 18:05:11.483555   50854 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0717 18:05:11.483560   50854 command_runner.go:130] > #   deprecated option "conmon".
	I0717 18:05:11.483569   50854 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0717 18:05:11.483576   50854 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0717 18:05:11.483583   50854 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0717 18:05:11.483589   50854 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 18:05:11.483595   50854 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0717 18:05:11.483603   50854 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0717 18:05:11.483609   50854 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0717 18:05:11.483616   50854 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0717 18:05:11.483619   50854 command_runner.go:130] > #
	I0717 18:05:11.483626   50854 command_runner.go:130] > # Using the seccomp notifier feature:
	I0717 18:05:11.483629   50854 command_runner.go:130] > #
	I0717 18:05:11.483635   50854 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0717 18:05:11.483643   50854 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0717 18:05:11.483646   50854 command_runner.go:130] > #
	I0717 18:05:11.483654   50854 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0717 18:05:11.483660   50854 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0717 18:05:11.483665   50854 command_runner.go:130] > #
	I0717 18:05:11.483671   50854 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0717 18:05:11.483676   50854 command_runner.go:130] > # feature.
	I0717 18:05:11.483679   50854 command_runner.go:130] > #
	I0717 18:05:11.483686   50854 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0717 18:05:11.483694   50854 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0717 18:05:11.483702   50854 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0717 18:05:11.483710   50854 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0717 18:05:11.483718   50854 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0717 18:05:11.483721   50854 command_runner.go:130] > #
	I0717 18:05:11.483729   50854 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0717 18:05:11.483734   50854 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0717 18:05:11.483740   50854 command_runner.go:130] > #
	I0717 18:05:11.483745   50854 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0717 18:05:11.483752   50854 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0717 18:05:11.483755   50854 command_runner.go:130] > #
	I0717 18:05:11.483763   50854 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0717 18:05:11.483769   50854 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0717 18:05:11.483775   50854 command_runner.go:130] > # limitation.
	I0717 18:05:11.483780   50854 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 18:05:11.483787   50854 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0717 18:05:11.483791   50854 command_runner.go:130] > runtime_type = "oci"
	I0717 18:05:11.483795   50854 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 18:05:11.483799   50854 command_runner.go:130] > runtime_config_path = ""
	I0717 18:05:11.483806   50854 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0717 18:05:11.483810   50854 command_runner.go:130] > monitor_cgroup = "pod"
	I0717 18:05:11.483814   50854 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 18:05:11.483818   50854 command_runner.go:130] > monitor_env = [
	I0717 18:05:11.483824   50854 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0717 18:05:11.483830   50854 command_runner.go:130] > ]
	I0717 18:05:11.483834   50854 command_runner.go:130] > privileged_without_host_devices = false
	I0717 18:05:11.483847   50854 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 18:05:11.483858   50854 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 18:05:11.483871   50854 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 18:05:11.483883   50854 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 18:05:11.483897   50854 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 18:05:11.483907   50854 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 18:05:11.483915   50854 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 18:05:11.483924   50854 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 18:05:11.483930   50854 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 18:05:11.483937   50854 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 18:05:11.483940   50854 command_runner.go:130] > # Example:
	I0717 18:05:11.483944   50854 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 18:05:11.483948   50854 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 18:05:11.483953   50854 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 18:05:11.483957   50854 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 18:05:11.483960   50854 command_runner.go:130] > # cpuset = 0
	I0717 18:05:11.483964   50854 command_runner.go:130] > # cpushares = "0-1"
	I0717 18:05:11.483967   50854 command_runner.go:130] > # Where:
	I0717 18:05:11.483972   50854 command_runner.go:130] > # The workload name is workload-type.
	I0717 18:05:11.483978   50854 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 18:05:11.483983   50854 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 18:05:11.483987   50854 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 18:05:11.483995   50854 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 18:05:11.484008   50854 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 18:05:11.484013   50854 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0717 18:05:11.484021   50854 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0717 18:05:11.484028   50854 command_runner.go:130] > # Default value is set to true
	I0717 18:05:11.484031   50854 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0717 18:05:11.484039   50854 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0717 18:05:11.484045   50854 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0717 18:05:11.484050   50854 command_runner.go:130] > # Default value is set to 'false'
	I0717 18:05:11.484056   50854 command_runner.go:130] > # disable_hostport_mapping = false
	I0717 18:05:11.484062   50854 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 18:05:11.484067   50854 command_runner.go:130] > #
	I0717 18:05:11.484073   50854 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 18:05:11.484079   50854 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 18:05:11.484088   50854 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 18:05:11.484096   50854 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 18:05:11.484104   50854 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 18:05:11.484109   50854 command_runner.go:130] > [crio.image]
	I0717 18:05:11.484115   50854 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 18:05:11.484121   50854 command_runner.go:130] > # default_transport = "docker://"
	I0717 18:05:11.484127   50854 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 18:05:11.484135   50854 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 18:05:11.484139   50854 command_runner.go:130] > # global_auth_file = ""
	I0717 18:05:11.484144   50854 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 18:05:11.484151   50854 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:05:11.484155   50854 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0717 18:05:11.484163   50854 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 18:05:11.484170   50854 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 18:05:11.484176   50854 command_runner.go:130] > # This option supports live configuration reload.
	I0717 18:05:11.484181   50854 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 18:05:11.484188   50854 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 18:05:11.484193   50854 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 18:05:11.484201   50854 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 18:05:11.484207   50854 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 18:05:11.484213   50854 command_runner.go:130] > # pause_command = "/pause"
	I0717 18:05:11.484219   50854 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0717 18:05:11.484226   50854 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0717 18:05:11.484235   50854 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0717 18:05:11.484245   50854 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0717 18:05:11.484253   50854 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0717 18:05:11.484261   50854 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0717 18:05:11.484267   50854 command_runner.go:130] > # pinned_images = [
	I0717 18:05:11.484270   50854 command_runner.go:130] > # ]
	I0717 18:05:11.484278   50854 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 18:05:11.484289   50854 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 18:05:11.484301   50854 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 18:05:11.484313   50854 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 18:05:11.484328   50854 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 18:05:11.484337   50854 command_runner.go:130] > # signature_policy = ""
	I0717 18:05:11.484347   50854 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0717 18:05:11.484359   50854 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0717 18:05:11.484370   50854 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0717 18:05:11.484381   50854 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0717 18:05:11.484391   50854 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0717 18:05:11.484400   50854 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0717 18:05:11.484411   50854 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 18:05:11.484422   50854 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 18:05:11.484430   50854 command_runner.go:130] > # changing them here.
	I0717 18:05:11.484439   50854 command_runner.go:130] > # insecure_registries = [
	I0717 18:05:11.484446   50854 command_runner.go:130] > # ]
	I0717 18:05:11.484455   50854 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 18:05:11.484465   50854 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 18:05:11.484474   50854 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 18:05:11.484485   50854 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 18:05:11.484495   50854 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 18:05:11.484507   50854 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 18:05:11.484513   50854 command_runner.go:130] > # CNI plugins.
	I0717 18:05:11.484517   50854 command_runner.go:130] > [crio.network]
	I0717 18:05:11.484523   50854 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 18:05:11.484530   50854 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 18:05:11.484537   50854 command_runner.go:130] > # cni_default_network = ""
	I0717 18:05:11.484543   50854 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 18:05:11.484549   50854 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 18:05:11.484556   50854 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 18:05:11.484562   50854 command_runner.go:130] > # plugin_dirs = [
	I0717 18:05:11.484567   50854 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 18:05:11.484573   50854 command_runner.go:130] > # ]
	I0717 18:05:11.484582   50854 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 18:05:11.484591   50854 command_runner.go:130] > [crio.metrics]
	I0717 18:05:11.484601   50854 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 18:05:11.484610   50854 command_runner.go:130] > enable_metrics = true
	I0717 18:05:11.484619   50854 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 18:05:11.484630   50854 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 18:05:11.484642   50854 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0717 18:05:11.484655   50854 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 18:05:11.484664   50854 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 18:05:11.484670   50854 command_runner.go:130] > # metrics_collectors = [
	I0717 18:05:11.484674   50854 command_runner.go:130] > # 	"operations",
	I0717 18:05:11.484682   50854 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 18:05:11.484688   50854 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 18:05:11.484693   50854 command_runner.go:130] > # 	"operations_errors",
	I0717 18:05:11.484699   50854 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 18:05:11.484703   50854 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 18:05:11.484710   50854 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 18:05:11.484713   50854 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 18:05:11.484719   50854 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 18:05:11.484724   50854 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 18:05:11.484730   50854 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 18:05:11.484734   50854 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0717 18:05:11.484740   50854 command_runner.go:130] > # 	"containers_oom_total",
	I0717 18:05:11.484744   50854 command_runner.go:130] > # 	"containers_oom",
	I0717 18:05:11.484749   50854 command_runner.go:130] > # 	"processes_defunct",
	I0717 18:05:11.484753   50854 command_runner.go:130] > # 	"operations_total",
	I0717 18:05:11.484760   50854 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 18:05:11.484764   50854 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 18:05:11.484770   50854 command_runner.go:130] > # 	"operations_errors_total",
	I0717 18:05:11.484774   50854 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 18:05:11.484781   50854 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 18:05:11.484786   50854 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 18:05:11.484793   50854 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 18:05:11.484797   50854 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 18:05:11.484803   50854 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 18:05:11.484807   50854 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0717 18:05:11.484813   50854 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0717 18:05:11.484817   50854 command_runner.go:130] > # ]
	I0717 18:05:11.484824   50854 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 18:05:11.484829   50854 command_runner.go:130] > # metrics_port = 9090
	I0717 18:05:11.484833   50854 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 18:05:11.484839   50854 command_runner.go:130] > # metrics_socket = ""
	I0717 18:05:11.484844   50854 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 18:05:11.484851   50854 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 18:05:11.484860   50854 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 18:05:11.484865   50854 command_runner.go:130] > # certificate on any modification event.
	I0717 18:05:11.484871   50854 command_runner.go:130] > # metrics_cert = ""
	I0717 18:05:11.484876   50854 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 18:05:11.484882   50854 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 18:05:11.484886   50854 command_runner.go:130] > # metrics_key = ""
	I0717 18:05:11.484894   50854 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 18:05:11.484898   50854 command_runner.go:130] > [crio.tracing]
	I0717 18:05:11.484905   50854 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 18:05:11.484912   50854 command_runner.go:130] > # enable_tracing = false
	I0717 18:05:11.484917   50854 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 18:05:11.484924   50854 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 18:05:11.484931   50854 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0717 18:05:11.484937   50854 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 18:05:11.484953   50854 command_runner.go:130] > # CRI-O NRI configuration.
	I0717 18:05:11.484962   50854 command_runner.go:130] > [crio.nri]
	I0717 18:05:11.484969   50854 command_runner.go:130] > # Globally enable or disable NRI.
	I0717 18:05:11.484976   50854 command_runner.go:130] > # enable_nri = false
	I0717 18:05:11.484980   50854 command_runner.go:130] > # NRI socket to listen on.
	I0717 18:05:11.484987   50854 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0717 18:05:11.484991   50854 command_runner.go:130] > # NRI plugin directory to use.
	I0717 18:05:11.484996   50854 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0717 18:05:11.485002   50854 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0717 18:05:11.485007   50854 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0717 18:05:11.485015   50854 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0717 18:05:11.485020   50854 command_runner.go:130] > # nri_disable_connections = false
	I0717 18:05:11.485027   50854 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0717 18:05:11.485032   50854 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0717 18:05:11.485039   50854 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0717 18:05:11.485043   50854 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0717 18:05:11.485051   50854 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 18:05:11.485056   50854 command_runner.go:130] > [crio.stats]
	I0717 18:05:11.485062   50854 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 18:05:11.485069   50854 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 18:05:11.485073   50854 command_runner.go:130] > # stats_collection_period = 0
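For reference, the [crio.metrics] section dumped above shows enable_metrics = true with the default metrics_port of 9090. A minimal sketch (an illustration, not part of this test run) of scraping that Prometheus endpoint from the node and printing only the crio_-prefixed series; the loopback address and port are assumptions taken from the defaults shown above:

package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Default CRI-O metrics endpoint when enable_metrics = true (assumption: port 9090).
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print only the crio_-prefixed series, e.g. crio_operations_total.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		if line := sc.Text(); strings.HasPrefix(line, "crio_") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}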
	I0717 18:05:11.485164   50854 cni.go:84] Creating CNI manager for ""
	I0717 18:05:11.485173   50854 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0717 18:05:11.485180   50854 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:05:11.485203   50854 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.16 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-866205 NodeName:multinode-866205 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:05:11.485349   50854 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-866205"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
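	The kubeadm config above is rendered from the kubeadm options logged at kubeadm.go:181. The following is a hedged sketch of the same idea, not minikube's actual template: a small options struct rendered through text/template into a trimmed-down InitConfiguration/ClusterConfiguration pair. The struct fields and the reduced YAML are illustrative only; values mirror this run.

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is a hypothetical, trimmed-down options struct for illustration.
type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.39.16",
		APIServerPort:     8443,
		NodeName:          "multinode-866205",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.30.2",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}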
	
	I0717 18:05:11.485414   50854 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:05:11.495073   50854 command_runner.go:130] > kubeadm
	I0717 18:05:11.495089   50854 command_runner.go:130] > kubectl
	I0717 18:05:11.495093   50854 command_runner.go:130] > kubelet
	I0717 18:05:11.495119   50854 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:05:11.495174   50854 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:05:11.504240   50854 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0717 18:05:11.519535   50854 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:05:11.534162   50854 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0717 18:05:11.548480   50854 ssh_runner.go:195] Run: grep 192.168.39.16	control-plane.minikube.internal$ /etc/hosts
	I0717 18:05:11.551896   50854 command_runner.go:130] > 192.168.39.16	control-plane.minikube.internal
	I0717 18:05:11.551965   50854 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:05:11.689285   50854 ssh_runner.go:195] Run: sudo systemctl start kubelet
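Before the kubelet is restarted, the log greps /etc/hosts for "192.168.39.16	control-plane.minikube.internal" and finds it already present. A minimal sketch of that idempotent check-then-append step; ensureHostsEntry is a hypothetical helper for illustration, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends "ip<TAB>host" to the hosts file only if no line for
// host with that ip exists yet (hypothetical helper, illustration only).
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.Contains(line, host) && strings.HasPrefix(strings.TrimSpace(line), ip) {
			return nil // already present, nothing to do
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintln(f, ip+"\t"+host)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.16", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}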
	I0717 18:05:11.703745   50854 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205 for IP: 192.168.39.16
	I0717 18:05:11.703772   50854 certs.go:194] generating shared ca certs ...
	I0717 18:05:11.703802   50854 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:05:11.703978   50854 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:05:11.704024   50854 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:05:11.704035   50854 certs.go:256] generating profile certs ...
	I0717 18:05:11.704137   50854 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/client.key
	I0717 18:05:11.704193   50854 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/apiserver.key.cece838c
	I0717 18:05:11.704238   50854 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/proxy-client.key
	I0717 18:05:11.704250   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:05:11.704265   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:05:11.704280   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:05:11.704297   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:05:11.704317   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:05:11.704373   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:05:11.704405   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:05:11.704421   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:05:11.704486   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:05:11.704517   50854 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:05:11.704528   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:05:11.704568   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:05:11.704594   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:05:11.704618   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:05:11.704658   50854 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:05:11.704689   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem -> /usr/share/ca-certificates/21577.pem
	I0717 18:05:11.704704   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> /usr/share/ca-certificates/215772.pem
	I0717 18:05:11.704718   50854 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:05:11.705308   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:05:11.727514   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:05:11.748923   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:05:11.770128   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:05:11.791417   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:05:11.812372   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 18:05:11.833667   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:05:11.854675   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/multinode-866205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:05:11.875559   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:05:11.897397   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:05:11.918246   50854 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:05:11.939318   50854 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:05:11.953727   50854 ssh_runner.go:195] Run: openssl version
	I0717 18:05:11.959057   50854 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0717 18:05:11.959111   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:05:11.968306   50854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:05:11.972164   50854 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:05:11.972215   50854 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:05:11.972244   50854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:05:11.977501   50854 command_runner.go:130] > 3ec20f2e
	I0717 18:05:11.977550   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:05:11.985560   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:05:11.994905   50854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:05:11.998747   50854 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:05:11.998772   50854 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:05:11.998816   50854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:05:12.003643   50854 command_runner.go:130] > b5213941
	I0717 18:05:12.003771   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:05:12.011939   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:05:12.021285   50854 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:05:12.024993   50854 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:05:12.025121   50854 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:05:12.025154   50854 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:05:12.029993   50854 command_runner.go:130] > 51391683
	I0717 18:05:12.030043   50854 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
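The three cert-installation passes above follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash (3ec20f2e, b5213941, 51391683 in this run), and symlink /etc/ssl/certs/<hash>.0 to it. A sketch of the hash-and-symlink step run locally; in the log these commands actually run on the guest via ssh_runner, so this is illustration only:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash mirrors: openssl x509 -hash -noout -in <pem>
// followed by: test -L /etc/ssl/certs/<hash>.0 || ln -fs <pem> /etc/ssl/certs/<hash>.0
func linkByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already exists
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}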
	I0717 18:05:12.038299   50854 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:05:12.042372   50854 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:05:12.042388   50854 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0717 18:05:12.042394   50854 command_runner.go:130] > Device: 253,1	Inode: 5245461     Links: 1
	I0717 18:05:12.042400   50854 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 18:05:12.042406   50854 command_runner.go:130] > Access: 2024-07-17 17:58:21.631940860 +0000
	I0717 18:05:12.042411   50854 command_runner.go:130] > Modify: 2024-07-17 17:58:21.631940860 +0000
	I0717 18:05:12.042415   50854 command_runner.go:130] > Change: 2024-07-17 17:58:21.631940860 +0000
	I0717 18:05:12.042419   50854 command_runner.go:130] >  Birth: 2024-07-17 17:58:21.631940860 +0000
	I0717 18:05:12.042561   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:05:12.047564   50854 command_runner.go:130] > Certificate will not expire
	I0717 18:05:12.047608   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:05:12.052874   50854 command_runner.go:130] > Certificate will not expire
	I0717 18:05:12.053010   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:05:12.079190   50854 command_runner.go:130] > Certificate will not expire
	I0717 18:05:12.079261   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:05:12.084502   50854 command_runner.go:130] > Certificate will not expire
	I0717 18:05:12.084553   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:05:12.089992   50854 command_runner.go:130] > Certificate will not expire
	I0717 18:05:12.090053   50854 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:05:12.095140   50854 command_runner.go:130] > Certificate will not expire
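Each "Certificate will not expire" line above comes from "openssl x509 -noout -checkend 86400", i.e. a check that the certificate stays valid for at least another 24 hours. A minimal in-process equivalent using crypto/x509; the certificate path is just one example taken from this run:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the same question openssl's -checkend answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}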
	I0717 18:05:12.095212   50854 kubeadm.go:392] StartCluster: {Name:multinode-866205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
2 ClusterName:multinode-866205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.16 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.113 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.78 Port:0 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:05:12.095353   50854 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:05:12.095416   50854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:05:12.127180   50854 command_runner.go:130] > 4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08
	I0717 18:05:12.127205   50854 command_runner.go:130] > 1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7
	I0717 18:05:12.127211   50854 command_runner.go:130] > 6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef
	I0717 18:05:12.127217   50854 command_runner.go:130] > 53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106
	I0717 18:05:12.127223   50854 command_runner.go:130] > 390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47
	I0717 18:05:12.127228   50854 command_runner.go:130] > bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a
	I0717 18:05:12.127238   50854 command_runner.go:130] > 5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f
	I0717 18:05:12.127245   50854 command_runner.go:130] > 768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d
	I0717 18:05:12.128504   50854 cri.go:89] found id: "4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08"
	I0717 18:05:12.128520   50854 cri.go:89] found id: "1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7"
	I0717 18:05:12.128524   50854 cri.go:89] found id: "6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef"
	I0717 18:05:12.128528   50854 cri.go:89] found id: "53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106"
	I0717 18:05:12.128530   50854 cri.go:89] found id: "390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47"
	I0717 18:05:12.128534   50854 cri.go:89] found id: "bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a"
	I0717 18:05:12.128537   50854 cri.go:89] found id: "5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f"
	I0717 18:05:12.128539   50854 cri.go:89] found id: "768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d"
	I0717 18:05:12.128542   50854 cri.go:89] found id: ""
	I0717 18:05:12.128589   50854 ssh_runner.go:195] Run: sudo runc list -f json
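The eight container IDs collected above come from the crictl invocation shown at cri.go:54. A sketch of running the same command and splitting its output into IDs; in the test this executes on the guest over SSH, so the local invocation here is an illustration:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same command the log runs: list all kube-system containers, IDs only.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}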
	
	
	==> CRI-O <==
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.418469830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0e753b2-00a7-46e0-9983-269bdd7fb1d9 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.420067294Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7357282-ae35-4740-8094-0173266b583f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.420534820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239758420511024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7357282-ae35-4740-8094-0173266b583f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.423351254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d060d51c-1de6-4936-a2a6-15c9d3bed1d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.423411291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d060d51c-1de6-4936-a2a6-15c9d3bed1d1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.423829919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4715a46e2baec137f37988949e5f783704acfefe3e92a3d4a0aa39dd54c648ca,PodSandboxId:8a810590d3716f6035bdd86963ffd02d2b98a3bec1491cd7afce399f2d77c915,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721239552802655844,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59bab707068cccd4ff807dfb4cbe1c9164ce49d80ecdb3334e035447c895132,PodSandboxId:43ad8250f0304816f7aca4c6eb7b33d619d35eceeb0728f21fcce0eeb1ed9f27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721239519346970397,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf7a6123dfc52831b70a4b7ab26667dc5dbfd3b6224dced4120c793504007930,PodSandboxId:3334be3daed4423efb4c5526619492349698fcf76ec835cf716d61803c2468e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239519337385540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126acaa753011c9c50ef72aaf9414bcf13f76b5f32b145885055dc58284112f0,PodSandboxId:130d890a0a2f25a870b2ad00d6a69f31bd2465561843ddc5f4561b6c17ffb3e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239519162543359,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},An
notations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600dc9dc4cbfe810d7bfdddf4001a2a6835b4f561ddee8ec89a3b97c0781e7e,PodSandboxId:f5a233028b6dbe79a4f81ad15478623db2e0b0fc0266375c0785bfc00d0fe23e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239519092702782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.ku
bernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9b4a09a89d9644c160bc53340de6d77de564f71b2afa786f3a582fdabfda56,PodSandboxId:35a360b43087033a085d7395fb453963cb00fd9958c9e64e1ecac26da1336029,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239514321889051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f771f4846c09fc27b0dda60952111f83f446d6df7eaf2a8998a5a20c2489aa45,PodSandboxId:8d8ee5895be8cdf42d8a5d3315f4fd0e2d6953134f1db2151287c953b2f775fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239514267128470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e8e3edcb645f3c0ff2b2960f9eec7a22f72853f96d97b2a2f1a60774be4ecd,PodSandboxId:eddf0dc388923d38de15f221cada729d343eedc7bb7e6263b323c4494e610d2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239514276879924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85689b761c08356d8c72ddbb6741d7811846bc176e5620e1f16292d2405380d2,PodSandboxId:0d667fdc790e4295acfcd1853c0d6d179a94cbb3a2a6d2c1b8bb2fdf763ac335,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239514193889527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036b3403e707124943825446aceca9e338fc1ad99d10a5fcb05ee5517fb831aa,PodSandboxId:f8e0d93c1dfef807c220fb730ad6a45f781d414dc379d0c5b88920d16ededd46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721239192830223235,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08,PodSandboxId:64f0544b273e837cc65e06e2daef1c2dff00a450bd15743d29d37db7f39428ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721239140324421303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7,PodSandboxId:4340bcf9e6fc2a296e9f277f39aebcf6b017024c2f0ccf7c4189a18216254786,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721239140265764506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},Annotations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef,PodSandboxId:af144f702171c92d505f826276bacfb71330149c01daa5fc2e1d2c2e2dac8889,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721239128563961477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106,PodSandboxId:699ce739fe6ca4d1b4b158cf41ff1ac719699fa57d2b3109d79ac3eea632728b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721239125014872368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.kubernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47,PodSandboxId:34e758884b69c4fce261741223b7e26143eb367fa6ed14938c7eb87c5afea287,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721239105888131919,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb
7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d,PodSandboxId:3dc429606d85505f6988a036253115b79a69058203d271847a8a06b8eee06c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721239105807182680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations
:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a,PodSandboxId:bec9243ce95ed60e667350a889a7f4b3b9a0523ee5a4261a31fea8492a8cb0dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721239105832036373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f,PodSandboxId:cef1c1c90ad7bcd6131429de1911c31541baccb72239ea5517e9b3d46d6ca94a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721239105812608924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map
[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d060d51c-1de6-4936-a2a6-15c9d3bed1d1 name=/runtime.v1.RuntimeService/ListContainers
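The journal entries above are CRI-O's debug traces of the kubelet's RuntimeService calls (Version, ImageFsInfo, ListContainers). A hedged sketch of issuing the same ListContainers RPC directly with the CRI API client; the socket path matches the crio.sock shown in the config earlier, while the exact module versions of google.golang.org/grpc and k8s.io/cri-api are assumptions:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket over a unix target (no TLS on a local socket).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// No filter, matching the "No filters were applied" requests in the journal.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s %s\n", c.Id[:12], c.GetMetadata().GetName(), c.State)
	}
}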
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.462450047Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=332c8012-2a7e-4027-93e5-0a84a67ce1cd name=/runtime.v1.RuntimeService/Version
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.462569577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=332c8012-2a7e-4027-93e5-0a84a67ce1cd name=/runtime.v1.RuntimeService/Version
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.463533123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d14bb659-af15-4cbd-a533-580d2bd88242 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.463924260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239758463901501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d14bb659-af15-4cbd-a533-580d2bd88242 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.464391703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4423f84-42c9-4d91-bca9-b983092392c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.464534875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4423f84-42c9-4d91-bca9-b983092392c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.464877707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4715a46e2baec137f37988949e5f783704acfefe3e92a3d4a0aa39dd54c648ca,PodSandboxId:8a810590d3716f6035bdd86963ffd02d2b98a3bec1491cd7afce399f2d77c915,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721239552802655844,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59bab707068cccd4ff807dfb4cbe1c9164ce49d80ecdb3334e035447c895132,PodSandboxId:43ad8250f0304816f7aca4c6eb7b33d619d35eceeb0728f21fcce0eeb1ed9f27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721239519346970397,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf7a6123dfc52831b70a4b7ab26667dc5dbfd3b6224dced4120c793504007930,PodSandboxId:3334be3daed4423efb4c5526619492349698fcf76ec835cf716d61803c2468e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239519337385540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126acaa753011c9c50ef72aaf9414bcf13f76b5f32b145885055dc58284112f0,PodSandboxId:130d890a0a2f25a870b2ad00d6a69f31bd2465561843ddc5f4561b6c17ffb3e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239519162543359,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},An
notations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600dc9dc4cbfe810d7bfdddf4001a2a6835b4f561ddee8ec89a3b97c0781e7e,PodSandboxId:f5a233028b6dbe79a4f81ad15478623db2e0b0fc0266375c0785bfc00d0fe23e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239519092702782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.ku
bernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9b4a09a89d9644c160bc53340de6d77de564f71b2afa786f3a582fdabfda56,PodSandboxId:35a360b43087033a085d7395fb453963cb00fd9958c9e64e1ecac26da1336029,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239514321889051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f771f4846c09fc27b0dda60952111f83f446d6df7eaf2a8998a5a20c2489aa45,PodSandboxId:8d8ee5895be8cdf42d8a5d3315f4fd0e2d6953134f1db2151287c953b2f775fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239514267128470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e8e3edcb645f3c0ff2b2960f9eec7a22f72853f96d97b2a2f1a60774be4ecd,PodSandboxId:eddf0dc388923d38de15f221cada729d343eedc7bb7e6263b323c4494e610d2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239514276879924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85689b761c08356d8c72ddbb6741d7811846bc176e5620e1f16292d2405380d2,PodSandboxId:0d667fdc790e4295acfcd1853c0d6d179a94cbb3a2a6d2c1b8bb2fdf763ac335,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239514193889527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036b3403e707124943825446aceca9e338fc1ad99d10a5fcb05ee5517fb831aa,PodSandboxId:f8e0d93c1dfef807c220fb730ad6a45f781d414dc379d0c5b88920d16ededd46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721239192830223235,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08,PodSandboxId:64f0544b273e837cc65e06e2daef1c2dff00a450bd15743d29d37db7f39428ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721239140324421303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7,PodSandboxId:4340bcf9e6fc2a296e9f277f39aebcf6b017024c2f0ccf7c4189a18216254786,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721239140265764506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},Annotations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef,PodSandboxId:af144f702171c92d505f826276bacfb71330149c01daa5fc2e1d2c2e2dac8889,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721239128563961477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106,PodSandboxId:699ce739fe6ca4d1b4b158cf41ff1ac719699fa57d2b3109d79ac3eea632728b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721239125014872368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.kubernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47,PodSandboxId:34e758884b69c4fce261741223b7e26143eb367fa6ed14938c7eb87c5afea287,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721239105888131919,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb
7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d,PodSandboxId:3dc429606d85505f6988a036253115b79a69058203d271847a8a06b8eee06c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721239105807182680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations
:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a,PodSandboxId:bec9243ce95ed60e667350a889a7f4b3b9a0523ee5a4261a31fea8492a8cb0dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721239105832036373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f,PodSandboxId:cef1c1c90ad7bcd6131429de1911c31541baccb72239ea5517e9b3d46d6ca94a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721239105812608924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map
[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4423f84-42c9-4d91-bca9-b983092392c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.502217782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=090709c7-8baa-4493-8089-08efdd571508 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.502341115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=090709c7-8baa-4493-8089-08efdd571508 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.503527341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec957bcc-1c6f-45cf-84f4-cfae2c97b1cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.503931355Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721239758503908305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143050,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec957bcc-1c6f-45cf-84f4-cfae2c97b1cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.504594403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11ecbf27-3cca-444e-91df-9dccfc49331e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.504665670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11ecbf27-3cca-444e-91df-9dccfc49331e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.505471548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4715a46e2baec137f37988949e5f783704acfefe3e92a3d4a0aa39dd54c648ca,PodSandboxId:8a810590d3716f6035bdd86963ffd02d2b98a3bec1491cd7afce399f2d77c915,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721239552802655844,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59bab707068cccd4ff807dfb4cbe1c9164ce49d80ecdb3334e035447c895132,PodSandboxId:43ad8250f0304816f7aca4c6eb7b33d619d35eceeb0728f21fcce0eeb1ed9f27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721239519346970397,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf7a6123dfc52831b70a4b7ab26667dc5dbfd3b6224dced4120c793504007930,PodSandboxId:3334be3daed4423efb4c5526619492349698fcf76ec835cf716d61803c2468e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239519337385540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126acaa753011c9c50ef72aaf9414bcf13f76b5f32b145885055dc58284112f0,PodSandboxId:130d890a0a2f25a870b2ad00d6a69f31bd2465561843ddc5f4561b6c17ffb3e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239519162543359,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},An
notations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600dc9dc4cbfe810d7bfdddf4001a2a6835b4f561ddee8ec89a3b97c0781e7e,PodSandboxId:f5a233028b6dbe79a4f81ad15478623db2e0b0fc0266375c0785bfc00d0fe23e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239519092702782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.ku
bernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9b4a09a89d9644c160bc53340de6d77de564f71b2afa786f3a582fdabfda56,PodSandboxId:35a360b43087033a085d7395fb453963cb00fd9958c9e64e1ecac26da1336029,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239514321889051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f771f4846c09fc27b0dda60952111f83f446d6df7eaf2a8998a5a20c2489aa45,PodSandboxId:8d8ee5895be8cdf42d8a5d3315f4fd0e2d6953134f1db2151287c953b2f775fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239514267128470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e8e3edcb645f3c0ff2b2960f9eec7a22f72853f96d97b2a2f1a60774be4ecd,PodSandboxId:eddf0dc388923d38de15f221cada729d343eedc7bb7e6263b323c4494e610d2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239514276879924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85689b761c08356d8c72ddbb6741d7811846bc176e5620e1f16292d2405380d2,PodSandboxId:0d667fdc790e4295acfcd1853c0d6d179a94cbb3a2a6d2c1b8bb2fdf763ac335,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239514193889527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036b3403e707124943825446aceca9e338fc1ad99d10a5fcb05ee5517fb831aa,PodSandboxId:f8e0d93c1dfef807c220fb730ad6a45f781d414dc379d0c5b88920d16ededd46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721239192830223235,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08,PodSandboxId:64f0544b273e837cc65e06e2daef1c2dff00a450bd15743d29d37db7f39428ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721239140324421303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7,PodSandboxId:4340bcf9e6fc2a296e9f277f39aebcf6b017024c2f0ccf7c4189a18216254786,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721239140265764506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},Annotations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef,PodSandboxId:af144f702171c92d505f826276bacfb71330149c01daa5fc2e1d2c2e2dac8889,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721239128563961477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106,PodSandboxId:699ce739fe6ca4d1b4b158cf41ff1ac719699fa57d2b3109d79ac3eea632728b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721239125014872368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.kubernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47,PodSandboxId:34e758884b69c4fce261741223b7e26143eb367fa6ed14938c7eb87c5afea287,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721239105888131919,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb
7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d,PodSandboxId:3dc429606d85505f6988a036253115b79a69058203d271847a8a06b8eee06c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721239105807182680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations
:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a,PodSandboxId:bec9243ce95ed60e667350a889a7f4b3b9a0523ee5a4261a31fea8492a8cb0dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721239105832036373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f,PodSandboxId:cef1c1c90ad7bcd6131429de1911c31541baccb72239ea5517e9b3d46d6ca94a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721239105812608924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map
[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11ecbf27-3cca-444e-91df-9dccfc49331e name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.528060260Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f73c06a8-3aef-433f-a960-7c8cbf60de6a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.529912795Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8a810590d3716f6035bdd86963ffd02d2b98a3bec1491cd7afce399f2d77c915,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-pkq4s,Uid:505e4353-4f57-49a2-b738-3a4a6393867a,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721239552685648740,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:18.552239632Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3334be3daed4423efb4c5526619492349698fcf76ec835cf716d61803c2468e7,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qmclk,Uid:2f2998e9-2aa3-4640-81e5-96bdadc07c15,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1721239518968234128,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:18.552228628Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:130d890a0a2f25a870b2ad00d6a69f31bd2465561843ddc5f4561b6c17ffb3e0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:700ef325-89ff-4051-a800-83e11439fcfb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721239518905191432,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T18:05:18.552238350Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:43ad8250f0304816f7aca4c6eb7b33d619d35eceeb0728f21fcce0eeb1ed9f27,Metadata:&PodSandboxMetadata{Name:kindnet-r7gm7,Uid:59db5a4d-7403-430d-af09-5a42d354c16c,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1721239518903871069,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:18.552242334Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f5a233028b6dbe79a4f81ad15478623db2e0b0fc0266375c0785bfc00d0fe23e,Metadata:&PodSandboxMetadata{Name:kube-proxy-tp9f2,Uid:4463bfd0-32aa-4f9a-9012-09c438fa3629,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721239518884509443,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T18:05:18.552243283Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:35a360b43087033a085d7395fb453963cb00fd9958c9e64e1ecac26da1336029,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-866205,Uid:fe085e0b5526902bebd65e025af1d82e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721239514064614420,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fe085e0b5526902bebd65e025af1d82e,kubernetes.io/config.seen: 2024-07-17T18:05:13.551492105Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eddf0dc388923d38de15f221cada729d343eedc7bb7e6263b323c4494e610d2f,Metadata:&PodSandboxMetadata{Name:kube-controller-mana
ger-multinode-866205,Uid:4730de90479a812149c69541430472f4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721239514047469487,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4730de90479a812149c69541430472f4,kubernetes.io/config.seen: 2024-07-17T18:05:13.551491298Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0d667fdc790e4295acfcd1853c0d6d179a94cbb3a2a6d2c1b8bb2fdf763ac335,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-866205,Uid:9b07bd3166c1dd4ab2f296d2d209526f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721239514044527622,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-8
66205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.16:8443,kubernetes.io/config.hash: 9b07bd3166c1dd4ab2f296d2d209526f,kubernetes.io/config.seen: 2024-07-17T18:05:13.551490149Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8d8ee5895be8cdf42d8a5d3315f4fd0e2d6953134f1db2151287c953b2f775fd,Metadata:&PodSandboxMetadata{Name:etcd-multinode-866205,Uid:6725d65a2a0c94758ba801028144bdb7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721239514029731404,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.16:2379,kubernete
s.io/config.hash: 6725d65a2a0c94758ba801028144bdb7,kubernetes.io/config.seen: 2024-07-17T18:05:13.551486011Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f8e0d93c1dfef807c220fb730ad6a45f781d414dc379d0c5b88920d16ededd46,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-pkq4s,Uid:505e4353-4f57-49a2-b738-3a4a6393867a,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721239190122590215,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:59:49.810942045Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64f0544b273e837cc65e06e2daef1c2dff00a450bd15743d29d37db7f39428ff,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qmclk,Uid:2f2998e9-2aa3-4640-81e5-96bdadc07c15,Namespace:kube-system,Attempt:
0,},State:SANDBOX_NOTREADY,CreatedAt:1721239140139557552,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:58:59.829149760Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4340bcf9e6fc2a296e9f277f39aebcf6b017024c2f0ccf7c4189a18216254786,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:700ef325-89ff-4051-a800-83e11439fcfb,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721239140132233753,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},Annotations:map[st
ring]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-17T17:58:59.825450852Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af144f702171c92d505f826276bacfb71330149c01daa5fc2e1d2c2e2dac8889,Metadata:&PodSandboxMetadata{Name:kindnet-r7gm7,Uid:59db5a4d-7403-430d-af09-5a42d354c16c,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721239124698614314,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:58:43.785667167Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:699ce739fe6ca4d1b4b158cf41ff1ac719699fa57d2b3109d79ac3eea632728b,Metadata:&PodSandboxMetadata{Name:kube-proxy-tp9f2,Uid:4463bfd0-32aa-4f9a-9012-09c438fa3629,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721239124683402823,Labels:map[string]string{controller-revision-hash: 669fc44fbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,k8s-app: kube-
proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-17T17:58:43.771807612Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3dc429606d85505f6988a036253115b79a69058203d271847a8a06b8eee06c87,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-866205,Uid:9b07bd3166c1dd4ab2f296d2d209526f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721239105658091216,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.16:8443,kubernetes.io/config.hash: 9b07bd3166c1dd4ab2f296d2d209526f,kubernetes.io/config.seen: 2024-07-17T17:58:25.182009410Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cef1c1c90ad7bcd61
31429de1911c31541baccb72239ea5517e9b3d46d6ca94a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-866205,Uid:4730de90479a812149c69541430472f4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721239105657768860,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4730de90479a812149c69541430472f4,kubernetes.io/config.seen: 2024-07-17T17:58:25.182010714Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bec9243ce95ed60e667350a889a7f4b3b9a0523ee5a4261a31fea8492a8cb0dd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-866205,Uid:fe085e0b5526902bebd65e025af1d82e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721239105649124068,Labels:map[string]string{co
mponent: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fe085e0b5526902bebd65e025af1d82e,kubernetes.io/config.seen: 2024-07-17T17:58:25.182011616Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:34e758884b69c4fce261741223b7e26143eb367fa6ed14938c7eb87c5afea287,Metadata:&PodSandboxMetadata{Name:etcd-multinode-866205,Uid:6725d65a2a0c94758ba801028144bdb7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1721239105645822671,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://1
92.168.39.16:2379,kubernetes.io/config.hash: 6725d65a2a0c94758ba801028144bdb7,kubernetes.io/config.seen: 2024-07-17T17:58:25.182005262Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f73c06a8-3aef-433f-a960-7c8cbf60de6a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.531111263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8dad0262-151d-4faa-800c-afd4f4921e0f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.531163813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8dad0262-151d-4faa-800c-afd4f4921e0f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:09:18 multinode-866205 crio[2868]: time="2024-07-17 18:09:18.531573638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4715a46e2baec137f37988949e5f783704acfefe3e92a3d4a0aa39dd54c648ca,PodSandboxId:8a810590d3716f6035bdd86963ffd02d2b98a3bec1491cd7afce399f2d77c915,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721239552802655844,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59bab707068cccd4ff807dfb4cbe1c9164ce49d80ecdb3334e035447c895132,PodSandboxId:43ad8250f0304816f7aca4c6eb7b33d619d35eceeb0728f21fcce0eeb1ed9f27,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721239519346970397,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf7a6123dfc52831b70a4b7ab26667dc5dbfd3b6224dced4120c793504007930,PodSandboxId:3334be3daed4423efb4c5526619492349698fcf76ec835cf716d61803c2468e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721239519337385540,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:126acaa753011c9c50ef72aaf9414bcf13f76b5f32b145885055dc58284112f0,PodSandboxId:130d890a0a2f25a870b2ad00d6a69f31bd2465561843ddc5f4561b6c17ffb3e0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721239519162543359,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},An
notations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600dc9dc4cbfe810d7bfdddf4001a2a6835b4f561ddee8ec89a3b97c0781e7e,PodSandboxId:f5a233028b6dbe79a4f81ad15478623db2e0b0fc0266375c0785bfc00d0fe23e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:1721239519092702782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.ku
bernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9b4a09a89d9644c160bc53340de6d77de564f71b2afa786f3a582fdabfda56,PodSandboxId:35a360b43087033a085d7395fb453963cb00fd9958c9e64e1ecac26da1336029,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721239514321889051,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 838e9a2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f771f4846c09fc27b0dda60952111f83f446d6df7eaf2a8998a5a20c2489aa45,PodSandboxId:8d8ee5895be8cdf42d8a5d3315f4fd0e2d6953134f1db2151287c953b2f775fd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721239514267128470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e8e3edcb645f3c0ff2b2960f9eec7a22f72853f96d97b2a2f1a60774be4ecd,PodSandboxId:eddf0dc388923d38de15f221cada729d343eedc7bb7e6263b323c4494e610d2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721239514276879924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85689b761c08356d8c72ddbb6741d7811846bc176e5620e1f16292d2405380d2,PodSandboxId:0d667fdc790e4295acfcd1853c0d6d179a94cbb3a2a6d2c1b8bb2fdf763ac335,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721239514193889527,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:036b3403e707124943825446aceca9e338fc1ad99d10a5fcb05ee5517fb831aa,PodSandboxId:f8e0d93c1dfef807c220fb730ad6a45f781d414dc379d0c5b88920d16ededd46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721239192830223235,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-pkq4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 505e4353-4f57-49a2-b738-3a4a6393867a,},Annotations:map[string]string{io.kubernetes.container.hash: d62b5309,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08,PodSandboxId:64f0544b273e837cc65e06e2daef1c2dff00a450bd15743d29d37db7f39428ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721239140324421303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qmclk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f2998e9-2aa3-4640-81e5-96bdadc07c15,},Annotations:map[string]string{io.kubernetes.container.hash: 2c29ba10,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1815402f04f919e11c3a96aaf379eccdbfe300319fe17e0acba5022a4aa426f7,PodSandboxId:4340bcf9e6fc2a296e9f277f39aebcf6b017024c2f0ccf7c4189a18216254786,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721239140265764506,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 700ef325-89ff-4051-a800-83e11439fcfb,},Annotations:map[string]string{io.kubernetes.container.hash: b2a3305e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef,PodSandboxId:af144f702171c92d505f826276bacfb71330149c01daa5fc2e1d2c2e2dac8889,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721239128563961477,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r7gm7,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 59db5a4d-7403-430d-af09-5a42d354c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 6e8eb416,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106,PodSandboxId:699ce739fe6ca4d1b4b158cf41ff1ac719699fa57d2b3109d79ac3eea632728b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_EXITED,CreatedAt:1721239125014872368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tp9f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4463bfd0-32aa-4f9a-9012-09c438fa3629,},Annotations:map[string]string{io.kubernetes.container.hash: 30e5870e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47,PodSandboxId:34e758884b69c4fce261741223b7e26143eb367fa6ed14938c7eb87c5afea287,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721239105888131919,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6725d65a2a0c94758ba801028144bdb
7,},Annotations:map[string]string{io.kubernetes.container.hash: 483df50f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d,PodSandboxId:3dc429606d85505f6988a036253115b79a69058203d271847a8a06b8eee06c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721239105807182680,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b07bd3166c1dd4ab2f296d2d209526f,},Annotations
:map[string]string{io.kubernetes.container.hash: 31b610c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a,PodSandboxId:bec9243ce95ed60e667350a889a7f4b3b9a0523ee5a4261a31fea8492a8cb0dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_EXITED,CreatedAt:1721239105832036373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe085e0b5526902bebd65e025af1d82e,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f,PodSandboxId:cef1c1c90ad7bcd6131429de1911c31541baccb72239ea5517e9b3d46d6ca94a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_EXITED,CreatedAt:1721239105812608924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-866205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4730de90479a812149c69541430472f4,},Annotations:map
[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8dad0262-151d-4faa-800c-afd4f4921e0f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4715a46e2baec       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   8a810590d3716       busybox-fc5497c4f-pkq4s
	d59bab707068c       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      3 minutes ago       Running             kindnet-cni               1                   43ad8250f0304       kindnet-r7gm7
	cf7a6123dfc52       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   3334be3daed44       coredns-7db6d8ff4d-qmclk
	126acaa753011       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   130d890a0a2f2       storage-provisioner
	a600dc9dc4cbf       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      3 minutes ago       Running             kube-proxy                1                   f5a233028b6db       kube-proxy-tp9f2
	ee9b4a09a89d9       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      4 minutes ago       Running             kube-scheduler            1                   35a360b430870       kube-scheduler-multinode-866205
	10e8e3edcb645       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      4 minutes ago       Running             kube-controller-manager   1                   eddf0dc388923       kube-controller-manager-multinode-866205
	f771f4846c09f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   8d8ee5895be8c       etcd-multinode-866205
	85689b761c083       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      4 minutes ago       Running             kube-apiserver            1                   0d667fdc790e4       kube-apiserver-multinode-866205
	036b3403e7071       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   f8e0d93c1dfef       busybox-fc5497c4f-pkq4s
	4d6289bad2649       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   64f0544b273e8       coredns-7db6d8ff4d-qmclk
	1815402f04f91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   4340bcf9e6fc2       storage-provisioner
	6a18586244f14       docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115    10 minutes ago      Exited              kindnet-cni               0                   af144f702171c       kindnet-r7gm7
	53d93ab94e35d       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772                                      10 minutes ago      Exited              kube-proxy                0                   699ce739fe6ca       kube-proxy-tp9f2
	390153e91db47       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   34e758884b69c       etcd-multinode-866205
	bf1f3ab84c4d1       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940                                      10 minutes ago      Exited              kube-scheduler            0                   bec9243ce95ed       kube-scheduler-multinode-866205
	5348c0dad6a9d       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974                                      10 minutes ago      Exited              kube-controller-manager   0                   cef1c1c90ad7b       kube-controller-manager-multinode-866205
	768cb64a493ab       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe                                      10 minutes ago      Exited              kube-apiserver            0                   3dc429606d855       kube-apiserver-multinode-866205
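	
	A roughly equivalent view of this listing can usually be pulled manually on the node with crictl, assuming the cri-o socket path reported in the node annotations later in this log (unix:///var/run/crio/crio.sock):
	
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --name kube-apiserver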
	
	
	==> coredns [4d6289bad2649585febeb87dc03ad5dc775b7790bb72598d2e2d6c977eb89b08] <==
	[INFO] 10.244.0.3:49209 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001705136s
	[INFO] 10.244.0.3:57868 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103651s
	[INFO] 10.244.0.3:45699 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101576s
	[INFO] 10.244.0.3:41906 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00102114s
	[INFO] 10.244.0.3:51373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061532s
	[INFO] 10.244.0.3:48020 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063962s
	[INFO] 10.244.0.3:55300 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068727s
	[INFO] 10.244.1.2:55232 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168559s
	[INFO] 10.244.1.2:48518 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121748s
	[INFO] 10.244.1.2:51994 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090624s
	[INFO] 10.244.1.2:52302 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093904s
	[INFO] 10.244.0.3:45649 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115979s
	[INFO] 10.244.0.3:49954 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069946s
	[INFO] 10.244.0.3:40368 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066178s
	[INFO] 10.244.0.3:43249 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083478s
	[INFO] 10.244.1.2:59587 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143079s
	[INFO] 10.244.1.2:59246 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00015225s
	[INFO] 10.244.1.2:45237 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000150468s
	[INFO] 10.244.1.2:55372 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000172366s
	[INFO] 10.244.0.3:47268 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107413s
	[INFO] 10.244.0.3:34574 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153236s
	[INFO] 10.244.0.3:58321 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000049605s
	[INFO] 10.244.0.3:51214 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056821s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
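	
	The queries above are ordinary in-cluster lookups from the pod network; a comparable lookup can be reproduced from the busybox test pod listed earlier (assuming it is still running), for example:
	
	  $ kubectl exec busybox-fc5497c4f-pkq4s -- nslookup kubernetes.default.svc.cluster.local
	  $ kubectl exec busybox-fc5497c4f-pkq4s -- nslookup host.minikube.internal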
	
	
	==> coredns [cf7a6123dfc52831b70a4b7ab26667dc5dbfd3b6224dced4120c793504007930] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46587 - 62762 "HINFO IN 9208503358798563584.6106408047153594360. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023371479s
	
	
	==> describe nodes <==
	Name:               multinode-866205
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-866205
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=multinode-866205
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T17_58_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 17:58:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-866205
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:09:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:05:17 +0000   Wed, 17 Jul 2024 17:58:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:05:17 +0000   Wed, 17 Jul 2024 17:58:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:05:17 +0000   Wed, 17 Jul 2024 17:58:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:05:17 +0000   Wed, 17 Jul 2024 17:58:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    multinode-866205
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64374630be4d4569b107ad30571f6123
	  System UUID:                64374630-be4d-4569-b107-ad30571f6123
	  Boot ID:                    4ba17509-dbc9-4811-8fcc-26405b310e79
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pkq4s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m29s
	  kube-system                 coredns-7db6d8ff4d-qmclk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-866205                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-r7gm7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-866205             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-866205    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-tp9f2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-866205             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)    kubelet          Node multinode-866205 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)    kubelet          Node multinode-866205 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)    kubelet          Node multinode-866205 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-866205 event: Registered Node multinode-866205 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-866205 status is now: NodeReady
	  Normal  Starting                 4m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node multinode-866205 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node multinode-866205 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node multinode-866205 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m48s                node-controller  Node multinode-866205 event: Registered Node multinode-866205 in Controller
	
	
	Name:               multinode-866205-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-866205-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=multinode-866205
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_17T18_05_56_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:05:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-866205-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:06:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 17 Jul 2024 18:06:26 +0000   Wed, 17 Jul 2024 18:07:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 17 Jul 2024 18:06:26 +0000   Wed, 17 Jul 2024 18:07:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 17 Jul 2024 18:06:26 +0000   Wed, 17 Jul 2024 18:07:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 17 Jul 2024 18:06:26 +0000   Wed, 17 Jul 2024 18:07:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.113
	  Hostname:    multinode-866205-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 968982e915f44dbb99c84c4f9e1ee63f
	  System UUID:                968982e9-15f4-4dbb-99c8-4c4f9e1ee63f
	  Boot ID:                    8856b435-38eb-4549-82d6-23623f5fb96f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bs4fx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kindnet-fwnkd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m52s
	  kube-system                 kube-proxy-sq4xn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m47s                  kube-proxy       
	  Normal  Starting                 3m18s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m52s (x2 over 9m52s)  kubelet          Node multinode-866205-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m52s (x2 over 9m52s)  kubelet          Node multinode-866205-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m52s (x2 over 9m52s)  kubelet          Node multinode-866205-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                9m31s                  kubelet          Node multinode-866205-m02 status is now: NodeReady
	  Normal  Starting                 3m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node multinode-866205-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node multinode-866205-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node multinode-866205-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m18s                  node-controller  Node multinode-866205-m02 event: Registered Node multinode-866205-m02 in Controller
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-866205-m02 status is now: NodeReady
	  Normal  NodeNotReady             98s                    node-controller  Node multinode-866205-m02 status is now: NodeNotReady
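	
	The m02 conditions above ("Kubelet stopped posting node status" followed by a NodeNotReady event) can be re-checked directly against the cluster; a minimal sketch using the node names from this report:
	
	  $ kubectl get nodes -o wide
	  $ kubectl describe node multinode-866205-m02 | grep -A 6 Conditions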
	
	
	==> dmesg <==
	[  +0.061087] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.172999] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.108411] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.253097] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +3.780993] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +5.517334] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.054400] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.496148] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.076667] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.356585] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.709909] systemd-fstab-generator[1474]: Ignoring "noauto" option for root device
	[Jul17 17:59] kauditd_printk_skb: 60 callbacks suppressed
	[ +50.020259] kauditd_printk_skb: 12 callbacks suppressed
	[Jul17 18:05] systemd-fstab-generator[2785]: Ignoring "noauto" option for root device
	[  +0.143886] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.160706] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.135850] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +0.262632] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +6.050344] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[  +0.078991] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.693520] systemd-fstab-generator[3074]: Ignoring "noauto" option for root device
	[  +5.636976] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.799315] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.522212] systemd-fstab-generator[3909]: Ignoring "noauto" option for root device
	[ +21.443514] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [390153e91db471f76a5c6245753d6bc1f5e47db5fedbde56d35ed2b13c44cf47] <==
	{"level":"warn","ts":"2024-07-17T17:59:35.402357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.847797ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T17:59:35.402627Z","caller":"traceutil/trace.go:171","msg":"trace[758796295] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:489; }","duration":"216.189848ms","start":"2024-07-17T17:59:35.186425Z","end":"2024-07-17T17:59:35.402614Z","steps":["trace[758796295] 'count revisions from in-memory index tree'  (duration: 215.751372ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:59:35.531545Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.101749ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1163213766002895341 > lease_revoke:<id:102490c1d8726951>","response":"size:28"}
	{"level":"info","ts":"2024-07-17T17:59:35.531619Z","caller":"traceutil/trace.go:171","msg":"trace[1308881342] linearizableReadLoop","detail":"{readStateIndex:516; appliedIndex:515; }","duration":"258.125534ms","start":"2024-07-17T17:59:35.273481Z","end":"2024-07-17T17:59:35.531606Z","steps":["trace[1308881342] 'read index received'  (duration: 65.933885ms)","trace[1308881342] 'applied index is now lower than readState.Index'  (duration: 192.190641ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T17:59:35.531713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.241194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-866205-m02\" ","response":"range_response_count:1 size:3023"}
	{"level":"info","ts":"2024-07-17T17:59:35.531747Z","caller":"traceutil/trace.go:171","msg":"trace[1242444334] range","detail":"{range_begin:/registry/minions/multinode-866205-m02; range_end:; response_count:1; response_revision:489; }","duration":"258.29749ms","start":"2024-07-17T17:59:35.273441Z","end":"2024-07-17T17:59:35.531738Z","steps":["trace[1242444334] 'agreement among raft nodes before linearized reading'  (duration: 258.23102ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T17:59:35.531817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.119546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-17T17:59:35.533117Z","caller":"traceutil/trace.go:171","msg":"trace[1973191743] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:489; }","duration":"118.441402ms","start":"2024-07-17T17:59:35.414659Z","end":"2024-07-17T17:59:35.533101Z","steps":["trace[1973191743] 'agreement among raft nodes before linearized reading'  (duration: 117.057126ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:00:20.365912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.815402ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1163213766002895676 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-866205-m03.17e311f37e69d9b0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-866205-m03.17e311f37e69d9b0\" value_size:646 lease:1163213766002895309 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T18:00:20.366182Z","caller":"traceutil/trace.go:171","msg":"trace[602514586] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"165.226418ms","start":"2024-07-17T18:00:20.20094Z","end":"2024-07-17T18:00:20.366166Z","steps":["trace[602514586] 'process raft request'  (duration: 165.170879ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T18:00:20.366356Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.110539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-866205-m03\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-07-17T18:00:20.366395Z","caller":"traceutil/trace.go:171","msg":"trace[612566261] range","detail":"{range_begin:/registry/minions/multinode-866205-m03; range_end:; response_count:1; response_revision:580; }","duration":"193.239152ms","start":"2024-07-17T18:00:20.173149Z","end":"2024-07-17T18:00:20.366388Z","steps":["trace[612566261] 'agreement among raft nodes before linearized reading'  (duration: 193.08749ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:00:20.36617Z","caller":"traceutil/trace.go:171","msg":"trace[568527838] transaction","detail":"{read_only:false; response_revision:579; number_of_response:1; }","duration":"241.648273ms","start":"2024-07-17T18:00:20.124493Z","end":"2024-07-17T18:00:20.366141Z","steps":["trace[568527838] 'process raft request'  (duration: 86.489137ms)","trace[568527838] 'compare'  (duration: 154.710804ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T18:00:20.366225Z","caller":"traceutil/trace.go:171","msg":"trace[563045572] linearizableReadLoop","detail":"{readStateIndex:615; appliedIndex:614; }","duration":"193.039397ms","start":"2024-07-17T18:00:20.17318Z","end":"2024-07-17T18:00:20.366219Z","steps":["trace[563045572] 'read index received'  (duration: 37.810691ms)","trace[563045572] 'applied index is now lower than readState.Index'  (duration: 155.227949ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T18:01:13.861061Z","caller":"traceutil/trace.go:171","msg":"trace[652025700] transaction","detail":"{read_only:false; response_revision:706; number_of_response:1; }","duration":"167.412828ms","start":"2024-07-17T18:01:13.693629Z","end":"2024-07-17T18:01:13.861042Z","steps":["trace[652025700] 'process raft request'  (duration: 167.292205ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T18:03:33.686853Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T18:03:33.686953Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-866205","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.16:2380"],"advertise-client-urls":["https://192.168.39.16:2379"]}
	{"level":"warn","ts":"2024-07-17T18:03:33.687071Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:03:33.687192Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:03:33.733168Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T18:03:33.733253Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.16:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T18:03:33.734292Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b6c76b3131c1024","current-leader-member-id":"b6c76b3131c1024"}
	{"level":"info","ts":"2024-07-17T18:03:33.73686Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-17T18:03:33.737032Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-17T18:03:33.737093Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-866205","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.16:2380"],"advertise-client-urls":["https://192.168.39.16:2379"]}
	
	
	==> etcd [f771f4846c09fc27b0dda60952111f83f446d6df7eaf2a8998a5a20c2489aa45] <==
	{"level":"info","ts":"2024-07-17T18:05:14.66944Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T18:05:14.669597Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T18:05:14.675909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 switched to configuration voters=(823163343393787940)"}
	{"level":"info","ts":"2024-07-17T18:05:14.676299Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cad58bbf0f3daddf","local-member-id":"b6c76b3131c1024","added-peer-id":"b6c76b3131c1024","added-peer-peer-urls":["https://192.168.39.16:2380"]}
	{"level":"info","ts":"2024-07-17T18:05:14.677686Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cad58bbf0f3daddf","local-member-id":"b6c76b3131c1024","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:05:14.677758Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:05:14.693771Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T18:05:14.693925Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-17T18:05:14.694601Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.16:2380"}
	{"level":"info","ts":"2024-07-17T18:05:14.696046Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b6c76b3131c1024","initial-advertise-peer-urls":["https://192.168.39.16:2380"],"listen-peer-urls":["https://192.168.39.16:2380"],"advertise-client-urls":["https://192.168.39.16:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.16:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T18:05:14.696222Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T18:05:16.528998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T18:05:16.529128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:05:16.529178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgPreVoteResp from b6c76b3131c1024 at term 2"}
	{"level":"info","ts":"2024-07-17T18:05:16.529221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T18:05:16.529245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 received MsgVoteResp from b6c76b3131c1024 at term 3"}
	{"level":"info","ts":"2024-07-17T18:05:16.529277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6c76b3131c1024 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T18:05:16.529341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b6c76b3131c1024 elected leader b6c76b3131c1024 at term 3"}
	{"level":"info","ts":"2024-07-17T18:05:16.534773Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:05:16.534844Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:05:16.534881Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:05:16.534615Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b6c76b3131c1024","local-member-attributes":"{Name:multinode-866205 ClientURLs:[https://192.168.39.16:2379]}","request-path":"/0/members/b6c76b3131c1024/attributes","cluster-id":"cad58bbf0f3daddf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:05:16.535403Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:05:16.536967Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.16:2379"}
	{"level":"info","ts":"2024-07-17T18:05:16.537294Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
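	
	Health of the restarted etcd member can typically be confirmed with etcdctl against the advertised client URL, reusing the certificate paths from the startup log above; treating the server certificate as a client certificate here is an assumption:
	
	  $ sudo ETCDCTL_API=3 etcdctl \
	      --endpoints=https://192.168.39.16:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint health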
	
	
	==> kernel <==
	 18:09:18 up 11 min,  0 users,  load average: 0.16, 0.11, 0.07
	Linux multinode-866205 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6a18586244f141247fc30c628575a1445b06f7d7ed3827e61328c357b88813ef] <==
	I0717 18:02:49.477015       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:02:59.476762       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:02:59.476809       1 main.go:303] handling current node
	I0717 18:02:59.476828       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:02:59.476833       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:02:59.476993       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:02:59.477018       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.3.0/24] 
	I0717 18:03:09.486030       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:03:09.486076       1 main.go:303] handling current node
	I0717 18:03:09.486096       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:03:09.486101       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:03:09.486246       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:03:09.486353       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.3.0/24] 
	I0717 18:03:19.485455       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:03:19.485565       1 main.go:303] handling current node
	I0717 18:03:19.485601       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:03:19.485620       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:03:19.485759       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:03:19.485782       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.3.0/24] 
	I0717 18:03:29.481021       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:03:29.481201       1 main.go:303] handling current node
	I0717 18:03:29.481269       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:03:29.481296       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:03:29.481573       1 main.go:299] Handling node with IPs: map[192.168.39.78:{}]
	I0717 18:03:29.481600       1 main.go:326] Node multinode-866205-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d59bab707068cccd4ff807dfb4cbe1c9164ce49d80ecdb3334e035447c895132] <==
	I0717 18:08:10.189869       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:08:20.190511       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:08:20.190567       1 main.go:303] handling current node
	I0717 18:08:20.190582       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:08:20.190588       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:08:30.189709       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:08:30.189788       1 main.go:303] handling current node
	I0717 18:08:30.189814       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:08:30.189823       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:08:40.193602       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:08:40.193710       1 main.go:303] handling current node
	I0717 18:08:40.193739       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:08:40.193757       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:08:50.192712       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:08:50.192739       1 main.go:303] handling current node
	I0717 18:08:50.192754       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:08:50.192759       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:09:00.192742       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:09:00.192783       1 main.go:303] handling current node
	I0717 18:09:00.192800       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:09:00.192806       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	I0717 18:09:10.198457       1 main.go:299] Handling node with IPs: map[192.168.39.16:{}]
	I0717 18:09:10.198617       1 main.go:303] handling current node
	I0717 18:09:10.198650       1 main.go:299] Handling node with IPs: map[192.168.39.113:{}]
	I0717 18:09:10.198669       1 main.go:326] Node multinode-866205-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [768cb64a493abf06583f77faed008c55d94dd2cfffa3580aae2dbb5850ab0f2d] <==
	W0717 18:03:33.723380       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723425       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723456       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723504       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723548       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723570       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723617       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723659       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723707       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723729       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723781       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723817       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723857       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723902       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723935       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723828       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.724015       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.724075       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723902       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723712       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723552       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723439       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723917       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.723978       1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:03:33.724174       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [85689b761c08356d8c72ddbb6741d7811846bc176e5620e1f16292d2405380d2] <==
	I0717 18:05:17.843102       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 18:05:17.843634       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 18:05:17.843690       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 18:05:17.843822       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 18:05:17.844561       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 18:05:17.844714       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 18:05:17.844790       1 aggregator.go:165] initial CRD sync complete...
	I0717 18:05:17.844830       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 18:05:17.844852       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 18:05:17.844874       1 cache.go:39] Caches are synced for autoregister controller
	I0717 18:05:17.844971       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 18:05:17.849759       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0717 18:05:17.851879       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0717 18:05:17.908001       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 18:05:17.914540       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 18:05:17.914571       1 policy_source.go:224] refreshing policies
	I0717 18:05:17.991010       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:05:18.745703       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 18:05:20.133012       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 18:05:20.245220       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 18:05:20.261031       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 18:05:20.330278       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 18:05:20.336293       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 18:05:30.685967       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 18:05:30.768857       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [10e8e3edcb645f3c0ff2b2960f9eec7a22f72853f96d97b2a2f1a60774be4ecd] <==
	I0717 18:05:51.575793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.865µs"
	I0717 18:05:55.808943       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-866205-m02\" does not exist"
	I0717 18:05:55.820684       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-866205-m02" podCIDRs=["10.244.1.0/24"]
	I0717 18:05:56.647009       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.506µs"
	I0717 18:05:57.774638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.923µs"
	I0717 18:05:57.780044       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.408µs"
	I0717 18:05:57.785356       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="85.355µs"
	I0717 18:05:57.787350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.489µs"
	I0717 18:06:14.633782       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:06:14.653875       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.589µs"
	I0717 18:06:14.666408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.049µs"
	I0717 18:06:18.191570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.125477ms"
	I0717 18:06:18.191829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.657µs"
	I0717 18:06:32.473403       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:06:33.519797       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:06:33.519917       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-866205-m03\" does not exist"
	I0717 18:06:33.537150       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-866205-m03" podCIDRs=["10.244.2.0/24"]
	I0717 18:06:52.338980       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m03"
	I0717 18:06:57.355801       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:07:40.690745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.525486ms"
	I0717 18:07:40.690990       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.592µs"
	I0717 18:07:50.621518       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-54x54"
	I0717 18:07:50.646974       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-54x54"
	I0717 18:07:50.647016       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-sgnbd"
	I0717 18:07:50.668513       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-sgnbd"
	
	
	==> kube-controller-manager [5348c0dad6a9debaaf0a993b44a78f343dbb687c1688f7b2ef79f06c7c2fff1f] <==
	I0717 17:59:03.555927       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0717 17:59:26.774994       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-866205-m02\" does not exist"
	I0717 17:59:26.789646       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-866205-m02" podCIDRs=["10.244.1.0/24"]
	I0717 17:59:28.560105       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-866205-m02"
	I0717 17:59:47.364360       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 17:59:49.826001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.693694ms"
	I0717 17:59:49.838498       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.214519ms"
	I0717 17:59:49.838580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.031µs"
	I0717 17:59:53.004082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.085633ms"
	I0717 17:59:53.004474       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.202µs"
	I0717 17:59:53.583170       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.493917ms"
	I0717 17:59:53.583254       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.96µs"
	I0717 18:00:20.370944       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-866205-m03\" does not exist"
	I0717 18:00:20.371066       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:00:20.437396       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-866205-m03" podCIDRs=["10.244.2.0/24"]
	I0717 18:00:23.602026       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-866205-m03"
	I0717 18:00:40.674287       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:01:08.587553       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:01:09.539448       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-866205-m03\" does not exist"
	I0717 18:01:09.540162       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:01:09.551251       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-866205-m03" podCIDRs=["10.244.3.0/24"]
	I0717 18:01:28.466433       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:02:13.651645       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-866205-m02"
	I0717 18:02:13.716849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.40575ms"
	I0717 18:02:13.717077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.81µs"
	
	
	==> kube-proxy [53d93ab94e35d5397d55262b67fa3c33f90f81526850cde8a3953834848c9106] <==
	I0717 17:58:45.289683       1 server_linux.go:69] "Using iptables proxy"
	I0717 17:58:45.299664       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0717 17:58:45.447485       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 17:58:45.447537       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 17:58:45.447553       1 server_linux.go:165] "Using iptables Proxier"
	I0717 17:58:45.454761       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 17:58:45.454986       1 server.go:872] "Version info" version="v1.30.2"
	I0717 17:58:45.455016       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 17:58:45.456797       1 config.go:192] "Starting service config controller"
	I0717 17:58:45.456819       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 17:58:45.456863       1 config.go:101] "Starting endpoint slice config controller"
	I0717 17:58:45.456867       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 17:58:45.457278       1 config.go:319] "Starting node config controller"
	I0717 17:58:45.457342       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 17:58:45.557766       1 shared_informer.go:320] Caches are synced for node config
	I0717 17:58:45.557809       1 shared_informer.go:320] Caches are synced for service config
	I0717 17:58:45.557848       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a600dc9dc4cbfe810d7bfdddf4001a2a6835b4f561ddee8ec89a3b97c0781e7e] <==
	I0717 18:05:19.379216       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:05:19.405997       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.16"]
	I0717 18:05:19.465214       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:05:19.465264       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:05:19.465280       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:05:19.467861       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:05:19.468090       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:05:19.468110       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:05:19.470483       1 config.go:192] "Starting service config controller"
	I0717 18:05:19.470551       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:05:19.470595       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:05:19.470611       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:05:19.471111       1 config.go:319] "Starting node config controller"
	I0717 18:05:19.471136       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:05:19.571070       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:05:19.571258       1 shared_informer.go:320] Caches are synced for node config
	I0717 18:05:19.571424       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [bf1f3ab84c4d182f6b7e66aaa9d7c152bc4cfe15c6e3ecb5b7ade1d12158fc8a] <==
	E0717 17:58:28.291576       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 17:58:28.290811       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 17:58:28.291624       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 17:58:28.290881       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 17:58:28.291670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 17:58:28.291018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 17:58:28.291739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 17:58:28.291032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 17:58:28.291791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 17:58:28.291146       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:58:28.291838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:58:29.229248       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 17:58:29.229427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 17:58:29.290808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 17:58:29.290918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 17:58:29.292966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 17:58:29.293066       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 17:58:29.381218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 17:58:29.381362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 17:58:29.454791       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 17:58:29.454902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 17:58:29.471086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 17:58:29.471130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0717 17:58:29.886522       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 18:03:33.698671       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ee9b4a09a89d9644c160bc53340de6d77de564f71b2afa786f3a582fdabfda56] <==
	W0717 18:05:17.817050       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:05:17.817059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 18:05:17.817128       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:05:17.817151       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 18:05:17.817204       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:05:17.817213       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:05:17.817272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:05:17.817295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:05:17.817399       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:05:17.817423       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:05:17.817480       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:05:17.817503       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:05:17.817556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:05:17.817578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:05:17.817693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:05:17.817715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:05:17.817787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:05:17.817809       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:05:17.817861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:05:17.817885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:05:17.817934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:05:17.817956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:05:17.817966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:05:17.817971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0717 18:05:18.807376       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.594155    3081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4463bfd0-32aa-4f9a-9012-09c438fa3629-xtables-lock\") pod \"kube-proxy-tp9f2\" (UID: \"4463bfd0-32aa-4f9a-9012-09c438fa3629\") " pod="kube-system/kube-proxy-tp9f2"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.594204    3081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/700ef325-89ff-4051-a800-83e11439fcfb-tmp\") pod \"storage-provisioner\" (UID: \"700ef325-89ff-4051-a800-83e11439fcfb\") " pod="kube-system/storage-provisioner"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.594271    3081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59db5a4d-7403-430d-af09-5a42d354c16c-cni-cfg\") pod \"kindnet-r7gm7\" (UID: \"59db5a4d-7403-430d-af09-5a42d354c16c\") " pod="kube-system/kindnet-r7gm7"
	Jul 17 18:05:18 multinode-866205 kubelet[3081]: I0717 18:05:18.594386    3081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59db5a4d-7403-430d-af09-5a42d354c16c-xtables-lock\") pod \"kindnet-r7gm7\" (UID: \"59db5a4d-7403-430d-af09-5a42d354c16c\") " pod="kube-system/kindnet-r7gm7"
	Jul 17 18:05:22 multinode-866205 kubelet[3081]: I0717 18:05:22.857250    3081 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 17 18:06:13 multinode-866205 kubelet[3081]: E0717 18:06:13.631667    3081 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:06:13 multinode-866205 kubelet[3081]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:06:13 multinode-866205 kubelet[3081]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:06:13 multinode-866205 kubelet[3081]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:06:13 multinode-866205 kubelet[3081]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:07:13 multinode-866205 kubelet[3081]: E0717 18:07:13.632796    3081 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:07:13 multinode-866205 kubelet[3081]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:07:13 multinode-866205 kubelet[3081]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:07:13 multinode-866205 kubelet[3081]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:07:13 multinode-866205 kubelet[3081]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:08:13 multinode-866205 kubelet[3081]: E0717 18:08:13.633533    3081 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:08:13 multinode-866205 kubelet[3081]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:08:13 multinode-866205 kubelet[3081]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:08:13 multinode-866205 kubelet[3081]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:08:13 multinode-866205 kubelet[3081]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:09:13 multinode-866205 kubelet[3081]: E0717 18:09:13.635339    3081 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:09:13 multinode-866205 kubelet[3081]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:09:13 multinode-866205 kubelet[3081]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:09:13 multinode-866205 kubelet[3081]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:09:13 multinode-866205 kubelet[3081]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:09:18.120012   52793 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19283-14386/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
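
Note on the "bufio.Scanner: token too long" error in the stderr above: that message is Go's bufio.Scanner hitting its default 64 KiB per-line limit, so lastStart.txt evidently contains a longer line. A minimal sketch, assuming the file is read with a plain bufio.Scanner (the actual minikube logs code may differ), of raising that limit:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path; the report refers to .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// The default MaxScanTokenSize is 64 KiB; longer lines produce
	// "bufio.Scanner: token too long". Allow tokens up to 1 MiB instead.
	scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}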
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-866205 -n multinode-866205
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-866205 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.17s)

                                                
                                    
x
+
TestPreload (276.77s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-422343 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0717 18:13:21.396522   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-422343 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m14.117110607s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-422343 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-422343 image pull gcr.io/k8s-minikube/busybox: (2.841848804s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-422343
E0717 18:15:24.837706   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 18:15:41.791839   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-422343: exit status 82 (2m0.453032206s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-422343"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
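
Exit status 82 (GUEST_STOP_TIMEOUT) above means the stop command gave up while the VM still reported state "Running". A rough sketch of that kind of poll-until-stopped wait; getVMState is a hypothetical stand-in, not minikube's real driver call:

package main

import (
	"errors"
	"fmt"
	"time"
)

// getVMState is a placeholder for querying the libvirt domain state.
func getVMState(name string) (string, error) {
	return "Running", nil // placeholder: always still running, as in the failure above
}

// waitForStop polls until the VM reports "Stopped" or the deadline passes,
// mirroring the kind of wait that expired in this test.
func waitForStop(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := getVMState(name)
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return errors.New("stop: unable to stop vm, current state \"Running\"")
}

func main() {
	// Short timeout for illustration; the real stop waited roughly two minutes.
	if err := waitForStop("test-preload-422343", 10*time.Second); err != nil {
		fmt.Println("GUEST_STOP_TIMEOUT:", err)
	}
}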
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-422343 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-07-17 18:17:21.36875683 +0000 UTC m=+3956.097952787
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-422343 -n test-preload-422343
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-422343 -n test-preload-422343: exit status 3 (18.44390369s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:17:39.809322   55702 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.35:22: connect: no route to host
	E0717 18:17:39.809347   55702 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.35:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-422343" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-422343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-422343
--- FAIL: TestPreload (276.77s)
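
The harness distinguishes these failures by process exit code (82 from the stop above, exit status 3 from the later status probe). A simplified, hypothetical version of how a Go test can run the binary and recover that code from *exec.ExitError:

package preload_test

import (
	"errors"
	"os/exec"
	"testing"
)

// TestStopExitCode is an illustrative sketch, not the real helper: run the
// binary and, if it fails, pull the exit code out of *exec.ExitError
// (82 here signals GUEST_STOP_TIMEOUT).
func TestStopExitCode(t *testing.T) {
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "test-preload-422343")
	out, err := cmd.CombinedOutput()
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			t.Fatalf("stop failed: exit status %d\n%s", exitErr.ExitCode(), out)
		}
		t.Fatalf("stop failed to run: %v", err)
	}
}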

                                                
                                    
x
+
TestKubernetesUpgrade (342.82s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-778511 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-778511 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m29.058840933s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-778511] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-778511" primary control-plane node in "kubernetes-upgrade-778511" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:23:09.797005   62943 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:23:09.797114   62943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:23:09.797119   62943 out.go:304] Setting ErrFile to fd 2...
	I0717 18:23:09.797123   62943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:23:09.797644   62943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:23:09.798592   62943 out.go:298] Setting JSON to false
	I0717 18:23:09.799802   62943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7533,"bootTime":1721233057,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:23:09.799884   62943 start.go:139] virtualization: kvm guest
	I0717 18:23:09.801902   62943 out.go:177] * [kubernetes-upgrade-778511] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:23:09.803229   62943 notify.go:220] Checking for updates...
	I0717 18:23:09.803248   62943 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:23:09.804570   62943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:23:09.805907   62943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:23:09.807201   62943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:23:09.808363   62943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:23:09.809473   62943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:23:09.811097   62943 config.go:182] Loaded profile config "NoKubernetes-456922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0717 18:23:09.811199   62943 config.go:182] Loaded profile config "cert-expiration-907422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:23:09.811310   62943 config.go:182] Loaded profile config "running-upgrade-475983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0717 18:23:09.811405   62943 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:23:09.847508   62943 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:23:09.848824   62943 start.go:297] selected driver: kvm2
	I0717 18:23:09.848836   62943 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:23:09.848847   62943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:23:09.849575   62943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:23:09.849661   62943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:23:09.868089   62943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:23:09.868130   62943 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:23:09.868353   62943 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 18:23:09.868415   62943 cni.go:84] Creating CNI manager for ""
	I0717 18:23:09.868431   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:23:09.868443   62943 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:23:09.868502   62943 start.go:340] cluster config:
	{Name:kubernetes-upgrade-778511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-778511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:23:09.868612   62943 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:23:09.870328   62943 out.go:177] * Starting "kubernetes-upgrade-778511" primary control-plane node in "kubernetes-upgrade-778511" cluster
	I0717 18:23:09.871592   62943 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:23:09.871621   62943 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 18:23:09.871640   62943 cache.go:56] Caching tarball of preloaded images
	I0717 18:23:09.871729   62943 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:23:09.871743   62943 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 18:23:09.871840   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/config.json ...
	I0717 18:23:09.871865   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/config.json: {Name:mk2a628ffbee9106514a8c0981df6b21ec45362f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:09.872039   62943 start.go:360] acquireMachinesLock for kubernetes-upgrade-778511: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:23:11.409410   62943 start.go:364] duration metric: took 1.537307701s to acquireMachinesLock for "kubernetes-upgrade-778511"
	I0717 18:23:11.409493   62943 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-778511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-778511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:23:11.409620   62943 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 18:23:11.411908   62943 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 18:23:11.412140   62943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:23:11.412195   62943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:23:11.429941   62943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46565
	I0717 18:23:11.430408   62943 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:23:11.430988   62943 main.go:141] libmachine: Using API Version  1
	I0717 18:23:11.431024   62943 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:23:11.431361   62943 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:23:11.431564   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetMachineName
	I0717 18:23:11.431731   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:23:11.431872   62943 start.go:159] libmachine.API.Create for "kubernetes-upgrade-778511" (driver="kvm2")
	I0717 18:23:11.431897   62943 client.go:168] LocalClient.Create starting
	I0717 18:23:11.431921   62943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 18:23:11.431950   62943 main.go:141] libmachine: Decoding PEM data...
	I0717 18:23:11.431963   62943 main.go:141] libmachine: Parsing certificate...
	I0717 18:23:11.432008   62943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 18:23:11.432024   62943 main.go:141] libmachine: Decoding PEM data...
	I0717 18:23:11.432035   62943 main.go:141] libmachine: Parsing certificate...
	I0717 18:23:11.432055   62943 main.go:141] libmachine: Running pre-create checks...
	I0717 18:23:11.432064   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .PreCreateCheck
	I0717 18:23:11.432461   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetConfigRaw
	I0717 18:23:11.432917   62943 main.go:141] libmachine: Creating machine...
	I0717 18:23:11.432933   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .Create
	I0717 18:23:11.433076   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Creating KVM machine...
	I0717 18:23:11.434275   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found existing default KVM network
	I0717 18:23:11.435398   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:11.435226   62967 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5e:32:a0} reservation:<nil>}
	I0717 18:23:11.436602   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:11.436502   62967 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:0d:ab:ac} reservation:<nil>}
	I0717 18:23:11.437687   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:11.437610   62967 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:88:f9:73} reservation:<nil>}
	I0717 18:23:11.439946   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:11.439848   62967 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 18:23:11.441265   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:11.441192   62967 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015510}
	I0717 18:23:11.441286   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | created network xml: 
	I0717 18:23:11.441294   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | <network>
	I0717 18:23:11.441300   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG |   <name>mk-kubernetes-upgrade-778511</name>
	I0717 18:23:11.441311   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG |   <dns enable='no'/>
	I0717 18:23:11.441315   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG |   
	I0717 18:23:11.441323   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0717 18:23:11.441328   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG |     <dhcp>
	I0717 18:23:11.441334   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0717 18:23:11.441339   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG |     </dhcp>
	I0717 18:23:11.441354   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG |   </ip>
	I0717 18:23:11.441358   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG |   
	I0717 18:23:11.441363   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | </network>
	I0717 18:23:11.441371   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | 
	I0717 18:23:11.446837   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | trying to create private KVM network mk-kubernetes-upgrade-778511 192.168.83.0/24...
	I0717 18:23:11.517465   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511 ...
	I0717 18:23:11.517500   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:23:11.517512   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | private KVM network mk-kubernetes-upgrade-778511 192.168.83.0/24 created
	I0717 18:23:11.517538   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:11.517398   62967 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:23:11.517611   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:23:11.744186   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:11.744063   62967 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/id_rsa...
	I0717 18:23:12.012522   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:12.012365   62967 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/kubernetes-upgrade-778511.rawdisk...
	I0717 18:23:12.012545   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Writing magic tar header
	I0717 18:23:12.012557   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Writing SSH key tar header
	I0717 18:23:12.012573   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:12.012488   62967 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511 ...
	I0717 18:23:12.012584   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511
	I0717 18:23:12.012603   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511 (perms=drwx------)
	I0717 18:23:12.012625   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:23:12.012678   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 18:23:12.012703   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:23:12.012829   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 18:23:12.012850   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 18:23:12.012871   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:23:12.012885   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:23:12.012899   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Checking permissions on dir: /home
	I0717 18:23:12.012918   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 18:23:12.012931   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Skipping /home - not owner
	I0717 18:23:12.012965   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:23:12.012977   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:23:12.013008   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Creating domain...
	I0717 18:23:12.013877   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) define libvirt domain using xml: 
	I0717 18:23:12.013890   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) <domain type='kvm'>
	I0717 18:23:12.013897   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   <name>kubernetes-upgrade-778511</name>
	I0717 18:23:12.013915   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   <memory unit='MiB'>2200</memory>
	I0717 18:23:12.013941   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   <vcpu>2</vcpu>
	I0717 18:23:12.013959   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   <features>
	I0717 18:23:12.013982   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <acpi/>
	I0717 18:23:12.014004   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <apic/>
	I0717 18:23:12.014018   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <pae/>
	I0717 18:23:12.014035   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     
	I0717 18:23:12.014048   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   </features>
	I0717 18:23:12.014060   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   <cpu mode='host-passthrough'>
	I0717 18:23:12.014072   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   
	I0717 18:23:12.014090   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   </cpu>
	I0717 18:23:12.014109   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   <os>
	I0717 18:23:12.014125   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <type>hvm</type>
	I0717 18:23:12.014138   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <boot dev='cdrom'/>
	I0717 18:23:12.014149   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <boot dev='hd'/>
	I0717 18:23:12.014166   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <bootmenu enable='no'/>
	I0717 18:23:12.014177   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   </os>
	I0717 18:23:12.014187   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   <devices>
	I0717 18:23:12.014210   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <disk type='file' device='cdrom'>
	I0717 18:23:12.014231   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/boot2docker.iso'/>
	I0717 18:23:12.014243   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <target dev='hdc' bus='scsi'/>
	I0717 18:23:12.014253   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <readonly/>
	I0717 18:23:12.014263   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     </disk>
	I0717 18:23:12.014278   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <disk type='file' device='disk'>
	I0717 18:23:12.014294   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:23:12.014330   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/kubernetes-upgrade-778511.rawdisk'/>
	I0717 18:23:12.014345   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <target dev='hda' bus='virtio'/>
	I0717 18:23:12.014354   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     </disk>
	I0717 18:23:12.014391   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <interface type='network'>
	I0717 18:23:12.014417   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <source network='mk-kubernetes-upgrade-778511'/>
	I0717 18:23:12.014430   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <model type='virtio'/>
	I0717 18:23:12.014448   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     </interface>
	I0717 18:23:12.014469   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <interface type='network'>
	I0717 18:23:12.014484   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <source network='default'/>
	I0717 18:23:12.014492   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <model type='virtio'/>
	I0717 18:23:12.014501   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     </interface>
	I0717 18:23:12.014512   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <serial type='pty'>
	I0717 18:23:12.014520   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <target port='0'/>
	I0717 18:23:12.014528   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     </serial>
	I0717 18:23:12.014548   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <console type='pty'>
	I0717 18:23:12.014569   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <target type='serial' port='0'/>
	I0717 18:23:12.014582   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     </console>
	I0717 18:23:12.014592   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     <rng model='virtio'>
	I0717 18:23:12.014602   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)       <backend model='random'>/dev/random</backend>
	I0717 18:23:12.014612   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     </rng>
	I0717 18:23:12.014623   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     
	I0717 18:23:12.014641   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)     
	I0717 18:23:12.014654   62943 main.go:141] libmachine: (kubernetes-upgrade-778511)   </devices>
	I0717 18:23:12.014662   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) </domain>
	I0717 18:23:12.014675   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) 
	I0717 18:23:12.018781   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:e8:22:23 in network default
	I0717 18:23:12.019390   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Ensuring networks are active...
	I0717 18:23:12.019416   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:12.020126   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Ensuring network default is active
	I0717 18:23:12.020480   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Ensuring network mk-kubernetes-upgrade-778511 is active
	I0717 18:23:12.021022   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Getting domain xml...
	I0717 18:23:12.021776   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Creating domain...
	I0717 18:23:13.285359   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Waiting to get IP...
	I0717 18:23:13.286235   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:13.286658   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:13.286700   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:13.286640   62967 retry.go:31] will retry after 234.497725ms: waiting for machine to come up
	I0717 18:23:13.523074   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:13.523580   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:13.523617   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:13.523517   62967 retry.go:31] will retry after 243.164248ms: waiting for machine to come up
	I0717 18:23:13.767919   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:13.814023   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:13.814058   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:13.813964   62967 retry.go:31] will retry after 449.400693ms: waiting for machine to come up
	I0717 18:23:14.264617   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:14.265041   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:14.265068   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:14.265008   62967 retry.go:31] will retry after 371.656079ms: waiting for machine to come up
	I0717 18:23:14.638627   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:14.639850   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:14.639899   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:14.639784   62967 retry.go:31] will retry after 523.289811ms: waiting for machine to come up
	I0717 18:23:15.164504   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:15.164988   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:15.165017   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:15.164922   62967 retry.go:31] will retry after 921.453902ms: waiting for machine to come up
	I0717 18:23:16.087795   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:16.088154   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:16.088175   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:16.088113   62967 retry.go:31] will retry after 900.578869ms: waiting for machine to come up
	I0717 18:23:16.990701   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:16.991316   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:16.991347   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:16.991259   62967 retry.go:31] will retry after 1.167596311s: waiting for machine to come up
	I0717 18:23:18.160564   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:18.160971   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:18.161000   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:18.160913   62967 retry.go:31] will retry after 1.27116447s: waiting for machine to come up
	I0717 18:23:19.434201   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:19.434673   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:19.434698   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:19.434624   62967 retry.go:31] will retry after 1.529018477s: waiting for machine to come up
	I0717 18:23:20.965589   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:20.966247   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:20.966272   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:20.966174   62967 retry.go:31] will retry after 2.549520542s: waiting for machine to come up
	I0717 18:23:23.516796   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:23.517322   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:23.517340   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:23.517276   62967 retry.go:31] will retry after 3.433715514s: waiting for machine to come up
	I0717 18:23:26.952829   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:26.953322   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:26.953347   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:26.953272   62967 retry.go:31] will retry after 3.143046638s: waiting for machine to come up
	I0717 18:23:30.100524   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:30.101062   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find current IP address of domain kubernetes-upgrade-778511 in network mk-kubernetes-upgrade-778511
	I0717 18:23:30.101085   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | I0717 18:23:30.101001   62967 retry.go:31] will retry after 3.547698069s: waiting for machine to come up
	I0717 18:23:33.650006   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:33.650463   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Found IP for machine: 192.168.83.153
	I0717 18:23:33.650481   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has current primary IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:33.650487   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Reserving static IP address...
	I0717 18:23:33.650828   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-778511", mac: "52:54:00:5e:65:e4", ip: "192.168.83.153"} in network mk-kubernetes-upgrade-778511
	I0717 18:23:33.723991   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Reserved static IP address: 192.168.83.153
	I0717 18:23:33.724020   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Getting to WaitForSSH function...
	I0717 18:23:33.724031   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Waiting for SSH to be available...
	I0717 18:23:33.726989   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:33.727615   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:33.727647   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:33.727760   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Using SSH client type: external
	I0717 18:23:33.727797   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/id_rsa (-rw-------)
	I0717 18:23:33.727835   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.153 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:23:33.727850   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | About to run SSH command:
	I0717 18:23:33.727866   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | exit 0
	I0717 18:23:33.848641   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | SSH cmd err, output: <nil>: 
	I0717 18:23:33.848854   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) KVM machine creation complete!
	I0717 18:23:33.849157   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetConfigRaw
	I0717 18:23:33.849706   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:23:33.849880   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:23:33.850022   62943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:23:33.850038   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetState
	I0717 18:23:33.851458   62943 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:23:33.851472   62943 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:23:33.851477   62943 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:23:33.851484   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:23:33.853829   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:33.854237   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:33.854280   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:33.854380   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:23:33.854574   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:33.854782   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:33.854933   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:23:33.855094   62943 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:33.855311   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.83.153 22 <nil> <nil>}
	I0717 18:23:33.855322   62943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:23:33.952085   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:23:33.952117   62943 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:23:33.952125   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:23:33.954822   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:33.955211   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:33.955241   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:33.955430   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:23:33.955662   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:33.955837   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:33.955976   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:23:33.956197   62943 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:33.956381   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.83.153 22 <nil> <nil>}
	I0717 18:23:33.956394   62943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:23:34.053556   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:23:34.053637   62943 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:23:34.053651   62943 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:23:34.053666   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetMachineName
	I0717 18:23:34.053918   62943 buildroot.go:166] provisioning hostname "kubernetes-upgrade-778511"
	I0717 18:23:34.053950   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetMachineName
	I0717 18:23:34.054173   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:23:34.056800   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.057212   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:34.057242   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.057363   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:23:34.057551   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:34.057710   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:34.057870   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:23:34.058027   62943 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:34.058194   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.83.153 22 <nil> <nil>}
	I0717 18:23:34.058205   62943 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-778511 && echo "kubernetes-upgrade-778511" | sudo tee /etc/hostname
	I0717 18:23:34.166251   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-778511
	
	I0717 18:23:34.166284   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:23:34.169089   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.169493   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:34.169527   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.169674   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:23:34.169873   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:34.170028   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:34.170131   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:23:34.170256   62943 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:34.170437   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.83.153 22 <nil> <nil>}
	I0717 18:23:34.170459   62943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-778511' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-778511/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-778511' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:23:34.272800   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:23:34.272827   62943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:23:34.272865   62943 buildroot.go:174] setting up certificates
	I0717 18:23:34.272875   62943 provision.go:84] configureAuth start
	I0717 18:23:34.272890   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetMachineName
	I0717 18:23:34.273167   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetIP
	I0717 18:23:34.275818   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.276101   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:34.276126   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.276336   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:23:34.278351   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.278712   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:34.278740   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.278892   62943 provision.go:143] copyHostCerts
	I0717 18:23:34.278964   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:23:34.278993   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:23:34.279068   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:23:34.279214   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:23:34.279231   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:23:34.279271   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:23:34.279370   62943 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:23:34.279380   62943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:23:34.279402   62943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:23:34.279474   62943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-778511 san=[127.0.0.1 192.168.83.153 kubernetes-upgrade-778511 localhost minikube]
	I0717 18:23:34.389782   62943 provision.go:177] copyRemoteCerts
	I0717 18:23:34.389837   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:23:34.389861   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:23:34.392622   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.392909   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:34.392968   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.393130   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:23:34.393331   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:34.393499   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:23:34.393704   62943 sshutil.go:53] new ssh client: &{IP:192.168.83.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/id_rsa Username:docker}
	I0717 18:23:34.470260   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:23:34.491752   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 18:23:34.512691   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:23:34.533777   62943 provision.go:87] duration metric: took 260.883399ms to configureAuth
	I0717 18:23:34.533813   62943 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:23:34.533975   62943 config.go:182] Loaded profile config "kubernetes-upgrade-778511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:23:34.534074   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:23:34.536817   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.537205   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:34.537242   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.537383   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:23:34.537565   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:34.537729   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:34.537848   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:23:34.537987   62943 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:34.538180   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.83.153 22 <nil> <nil>}
	I0717 18:23:34.538196   62943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:23:34.786805   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:23:34.786838   62943 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:23:34.786849   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetURL
	I0717 18:23:34.788230   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | Using libvirt version 6000000
	I0717 18:23:34.790510   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.790853   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:34.790879   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.791036   62943 main.go:141] libmachine: Docker is up and running!
	I0717 18:23:34.791050   62943 main.go:141] libmachine: Reticulating splines...
	I0717 18:23:34.791058   62943 client.go:171] duration metric: took 23.359153591s to LocalClient.Create
	I0717 18:23:34.791085   62943 start.go:167] duration metric: took 23.35921282s to libmachine.API.Create "kubernetes-upgrade-778511"
	I0717 18:23:34.791098   62943 start.go:293] postStartSetup for "kubernetes-upgrade-778511" (driver="kvm2")
	I0717 18:23:34.791114   62943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:23:34.791136   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:23:34.791346   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:23:34.791384   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:23:34.793799   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.794183   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:34.794203   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.794408   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:23:34.794584   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:34.794770   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:23:34.794945   62943 sshutil.go:53] new ssh client: &{IP:192.168.83.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/id_rsa Username:docker}
	I0717 18:23:34.870570   62943 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:23:34.874235   62943 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:23:34.874256   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:23:34.874306   62943 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:23:34.874384   62943 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:23:34.874466   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:23:34.883205   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:23:34.904309   62943 start.go:296] duration metric: took 113.178802ms for postStartSetup
	I0717 18:23:34.904359   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetConfigRaw
	I0717 18:23:34.904933   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetIP
	I0717 18:23:34.907812   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.908153   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:34.908182   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.908399   62943 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/config.json ...
	I0717 18:23:34.908590   62943 start.go:128] duration metric: took 23.498956052s to createHost
	I0717 18:23:34.908615   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:23:34.910978   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.911307   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:34.911331   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:34.911439   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:23:34.911620   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:34.911780   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:34.911941   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:23:34.912123   62943 main.go:141] libmachine: Using SSH client type: native
	I0717 18:23:34.912297   62943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.83.153 22 <nil> <nil>}
	I0717 18:23:34.912310   62943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:23:35.009237   62943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721240614.976632238
	
	I0717 18:23:35.009262   62943 fix.go:216] guest clock: 1721240614.976632238
	I0717 18:23:35.009269   62943 fix.go:229] Guest: 2024-07-17 18:23:34.976632238 +0000 UTC Remote: 2024-07-17 18:23:34.908604028 +0000 UTC m=+25.143614870 (delta=68.02821ms)
	I0717 18:23:35.009286   62943 fix.go:200] guest clock delta is within tolerance: 68.02821ms
	I0717 18:23:35.009292   62943 start.go:83] releasing machines lock for "kubernetes-upgrade-778511", held for 23.599847859s
	I0717 18:23:35.009315   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:23:35.009598   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetIP
	I0717 18:23:35.012488   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:35.012908   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:35.012939   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:35.013116   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:23:35.013681   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:23:35.013914   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:23:35.014049   62943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:23:35.014108   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:23:35.014190   62943 ssh_runner.go:195] Run: cat /version.json
	I0717 18:23:35.014215   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:23:35.016871   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:35.017216   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:35.017244   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:35.017276   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:35.017377   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:23:35.017558   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:35.017794   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:35.017822   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:35.017797   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:23:35.018025   62943 sshutil.go:53] new ssh client: &{IP:192.168.83.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/id_rsa Username:docker}
	I0717 18:23:35.018049   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:23:35.018197   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:23:35.018361   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:23:35.018544   62943 sshutil.go:53] new ssh client: &{IP:192.168.83.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/id_rsa Username:docker}
	I0717 18:23:35.137987   62943 ssh_runner.go:195] Run: systemctl --version
	I0717 18:23:35.143963   62943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:23:35.308590   62943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:23:35.314228   62943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:23:35.314284   62943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:23:35.332025   62943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:23:35.332046   62943 start.go:495] detecting cgroup driver to use...
	I0717 18:23:35.332122   62943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:23:35.349441   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:23:35.363345   62943 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:23:35.363411   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:23:35.376854   62943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:23:35.389814   62943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:23:35.517000   62943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:23:35.672075   62943 docker.go:233] disabling docker service ...
	I0717 18:23:35.672153   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:23:35.685928   62943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:23:35.698635   62943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:23:35.822490   62943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:23:35.947221   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:23:35.965225   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:23:35.982676   62943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 18:23:35.982730   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:35.992019   62943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:23:35.992080   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:36.001785   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:23:36.011253   62943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
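For reference, a minimal sketch (not part of the test run; path and values are taken only from the sed commands above) of confirming the cri-o drop-in these edits produce:

	# inspect the cri-o drop-in that the sed commands above edited
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"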
	I0717 18:23:36.020890   62943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:23:36.030585   62943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:23:36.039481   62943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:23:36.039540   62943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:23:36.051382   62943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
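The same bridge-netfilter preparation done by hand looks roughly like the sketch below; the module, sysctl, and proc path are the ones appearing in the log above.

	# load the bridge netfilter module, then confirm the settings the runtime needs
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # typically reports 1 once the module is loaded
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward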
	I0717 18:23:36.060517   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:23:36.197424   62943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:23:36.335746   62943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:23:36.335818   62943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:23:36.340067   62943 start.go:563] Will wait 60s for crictl version
	I0717 18:23:36.340127   62943 ssh_runner.go:195] Run: which crictl
	I0717 18:23:36.343537   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:23:36.380121   62943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:23:36.380214   62943 ssh_runner.go:195] Run: crio --version
	I0717 18:23:36.410134   62943 ssh_runner.go:195] Run: crio --version
	I0717 18:23:36.442164   62943 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 18:23:36.443371   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetIP
	I0717 18:23:36.446372   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:36.446771   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:23:25 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:23:36.446799   62943 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:23:36.446970   62943 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0717 18:23:36.450980   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:23:36.463495   62943 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-778511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-778511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.153 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:23:36.463639   62943 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:23:36.463701   62943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:23:36.501979   62943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:23:36.502044   62943 ssh_runner.go:195] Run: which lz4
	I0717 18:23:36.506249   62943 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 18:23:36.510249   62943 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:23:36.510288   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 18:23:38.012469   62943 crio.go:462] duration metric: took 1.506245514s to copy over tarball
	I0717 18:23:38.012532   62943 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:23:40.549474   62943 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.53691102s)
	I0717 18:23:40.549509   62943 crio.go:469] duration metric: took 2.537014236s to extract the tarball
	I0717 18:23:40.549519   62943 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:23:40.593069   62943 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:23:40.640672   62943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
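The preload check above amounts to looking for the pinned apiserver image in the runtime; a minimal sketch of reproducing it by hand, with the image name copied from the log:

	# reproduce the preload check: is the expected apiserver image already in cri-o?
	sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.20.0' \
	  || echo "kube-apiserver v1.20.0 not present; images are not preloaded"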
	I0717 18:23:40.640701   62943 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:23:40.640805   62943 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:23:40.640829   62943 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 18:23:40.640827   62943 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:23:40.640806   62943 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:23:40.640888   62943 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:23:40.640839   62943 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 18:23:40.640804   62943 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:23:40.641037   62943 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:23:40.642506   62943 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 18:23:40.642551   62943 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:23:40.642579   62943 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:23:40.642672   62943 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:23:40.642701   62943 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:23:40.642719   62943 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:23:40.642716   62943 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:23:40.642897   62943 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 18:23:40.919079   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:23:40.926505   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 18:23:40.930929   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 18:23:40.939733   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:23:40.965975   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 18:23:40.974215   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:23:40.989416   62943 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 18:23:40.989478   62943 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:23:40.989527   62943 ssh_runner.go:195] Run: which crictl
	I0717 18:23:40.998800   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:23:41.008391   62943 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 18:23:41.008435   62943 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 18:23:41.008487   62943 ssh_runner.go:195] Run: which crictl
	I0717 18:23:41.042818   62943 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 18:23:41.042861   62943 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:23:41.042908   62943 ssh_runner.go:195] Run: which crictl
	I0717 18:23:41.060056   62943 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 18:23:41.060101   62943 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:23:41.060146   62943 ssh_runner.go:195] Run: which crictl
	I0717 18:23:41.085842   62943 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 18:23:41.085886   62943 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 18:23:41.085933   62943 ssh_runner.go:195] Run: which crictl
	I0717 18:23:41.092110   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:23:41.092227   62943 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 18:23:41.092263   62943 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:23:41.092309   62943 ssh_runner.go:195] Run: which crictl
	I0717 18:23:41.105243   62943 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 18:23:41.105283   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 18:23:41.105289   62943 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:23:41.105319   62943 ssh_runner.go:195] Run: which crictl
	I0717 18:23:41.105351   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 18:23:41.105364   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:23:41.105366   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 18:23:41.169616   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 18:23:41.169663   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:23:41.203356   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 18:23:41.203417   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 18:23:41.203462   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 18:23:41.203466   62943 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:23:41.209348   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 18:23:41.239029   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 18:23:41.244864   62943 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 18:23:41.482202   62943 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:23:41.623389   62943 cache_images.go:92] duration metric: took 982.665868ms to LoadCachedImages
	W0717 18:23:41.623472   62943 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0717 18:23:41.623490   62943 kubeadm.go:934] updating node { 192.168.83.153 8443 v1.20.0 crio true true} ...
	I0717 18:23:41.623633   62943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-778511 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-778511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:23:41.623718   62943 ssh_runner.go:195] Run: crio config
	I0717 18:23:41.676536   62943 cni.go:84] Creating CNI manager for ""
	I0717 18:23:41.676561   62943 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:23:41.676574   62943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:23:41.676630   62943 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.153 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-778511 NodeName:kubernetes-upgrade-778511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 18:23:41.676860   62943 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-778511"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.153
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.153"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:23:41.677063   62943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 18:23:41.688614   62943 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:23:41.688683   62943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:23:41.698237   62943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0717 18:23:41.716514   62943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:23:41.732750   62943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0717 18:23:41.748614   62943 ssh_runner.go:195] Run: grep 192.168.83.153	control-plane.minikube.internal$ /etc/hosts
	I0717 18:23:41.752473   62943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.153	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:23:41.763834   62943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:23:41.889587   62943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:23:41.908988   62943 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511 for IP: 192.168.83.153
	I0717 18:23:41.909014   62943 certs.go:194] generating shared ca certs ...
	I0717 18:23:41.909035   62943 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:41.909208   62943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:23:41.909265   62943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:23:41.909278   62943 certs.go:256] generating profile certs ...
	I0717 18:23:41.909370   62943 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/client.key
	I0717 18:23:41.909389   62943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/client.crt with IP's: []
	I0717 18:23:42.148802   62943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/client.crt ...
	I0717 18:23:42.148829   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/client.crt: {Name:mk5522ff450169e5db78e7ca1c0aacd321c5c1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:42.149007   62943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/client.key ...
	I0717 18:23:42.149022   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/client.key: {Name:mk001bcc3a443a70fc6c3614e49b7a5cceb78f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:42.149105   62943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.key.79af623a
	I0717 18:23:42.149124   62943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.crt.79af623a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.153]
	I0717 18:23:42.382031   62943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.crt.79af623a ...
	I0717 18:23:42.382061   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.crt.79af623a: {Name:mk36ba5d2ddefedf74f5c0953d8427d8be9064a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:42.382211   62943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.key.79af623a ...
	I0717 18:23:42.382227   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.key.79af623a: {Name:mka36fe4b8924fd371cff04e818882ede83c3c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:42.382295   62943 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.crt.79af623a -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.crt
	I0717 18:23:42.382377   62943 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.key.79af623a -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.key
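A minimal sketch (profile path copied from the log above) of confirming that the SAN IPs requested for the apiserver certificate actually landed in the generated file:

	# list the SANs in the freshly generated apiserver cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'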
	I0717 18:23:42.382432   62943 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/proxy-client.key
	I0717 18:23:42.382446   62943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/proxy-client.crt with IP's: []
	I0717 18:23:42.613720   62943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/proxy-client.crt ...
	I0717 18:23:42.613747   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/proxy-client.crt: {Name:mk382be58d9af1539e25ff24490eb035a5590c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:42.613890   62943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/proxy-client.key ...
	I0717 18:23:42.613901   62943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/proxy-client.key: {Name:mke31f0943f03cb7df9f97246263e6b28d84f35f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:23:42.614068   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:23:42.614109   62943 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:23:42.614123   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:23:42.614156   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:23:42.614215   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:23:42.614261   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:23:42.614315   62943 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:23:42.614874   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:23:42.640033   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:23:42.663462   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:23:42.688779   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:23:42.722104   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 18:23:42.749544   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 18:23:42.783264   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:23:42.813503   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:23:42.836117   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:23:42.860735   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:23:42.885311   62943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:23:42.914164   62943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:23:42.930544   62943 ssh_runner.go:195] Run: openssl version
	I0717 18:23:42.936609   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:23:42.947494   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:23:42.952156   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:23:42.952218   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:23:42.957954   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:23:42.968835   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:23:42.979515   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:23:42.984353   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:23:42.984407   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:23:42.991657   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:23:43.005995   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:23:43.016513   62943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:23:43.021250   62943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:23:43.021323   62943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:23:43.026770   62943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
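The /etc/ssl/certs/b5213941.0 link name above is the OpenSSL subject hash of the CA file plus a ".0" suffix; a minimal sketch of deriving the same symlink by hand, using only the commands shown in the log:

	# the symlink name is the cert's subject hash (e.g. b5213941) plus ".0"
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"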
	I0717 18:23:43.037787   62943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:23:43.041878   62943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:23:43.041941   62943 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-778511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-778511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.153 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:23:43.042021   62943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:23:43.042088   62943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:23:43.083333   62943 cri.go:89] found id: ""
	I0717 18:23:43.083421   62943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:23:43.093740   62943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:23:43.103391   62943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:23:43.112789   62943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:23:43.112808   62943 kubeadm.go:157] found existing configuration files:
	
	I0717 18:23:43.112850   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:23:43.121697   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:23:43.121759   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:23:43.130899   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:23:43.139931   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:23:43.139993   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:23:43.149648   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:23:43.158569   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:23:43.158658   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:23:43.168115   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:23:43.180385   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:23:43.180457   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
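Before the full init below, a minimal sketch (binary and config paths copied from the log; --dry-run is a standard kubeadm init flag) of exercising the generated config non-destructively:

	# validate the generated kubeadm config without changing the node
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run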
	I0717 18:23:43.193402   62943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:23:43.320604   62943 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:23:43.320725   62943 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:23:43.459673   62943 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:23:43.459905   62943 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:23:43.460038   62943 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:23:43.667111   62943 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:23:43.781337   62943 out.go:204]   - Generating certificates and keys ...
	I0717 18:23:43.781486   62943 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:23:43.781623   62943 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:23:43.979829   62943 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:23:44.127752   62943 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:23:44.269228   62943 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:23:44.438879   62943 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 18:23:44.515783   62943 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 18:23:44.516147   62943 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-778511 localhost] and IPs [192.168.83.153 127.0.0.1 ::1]
	I0717 18:23:44.660479   62943 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 18:23:44.660654   62943 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-778511 localhost] and IPs [192.168.83.153 127.0.0.1 ::1]
	I0717 18:23:44.759696   62943 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:23:45.021489   62943 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:23:45.218533   62943 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 18:23:45.218863   62943 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:23:45.313279   62943 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:23:45.477747   62943 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:23:45.631796   62943 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:23:45.860485   62943 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:23:45.880528   62943 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:23:45.883029   62943 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:23:45.883101   62943 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:23:46.028052   62943 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:23:46.029444   62943 out.go:204]   - Booting up control plane ...
	I0717 18:23:46.029584   62943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:23:46.042098   62943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:23:46.043190   62943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:23:46.043913   62943 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:23:46.048435   62943 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:24:26.038055   62943 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:24:26.038774   62943 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:24:26.039044   62943 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:24:31.038985   62943 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:24:31.039249   62943 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:24:41.038891   62943 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:24:41.039148   62943 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:25:01.039736   62943 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:25:01.040013   62943 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:25:41.041659   62943 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:25:41.041977   62943 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:25:41.041995   62943 kubeadm.go:310] 
	I0717 18:25:41.042049   62943 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:25:41.042102   62943 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:25:41.042110   62943 kubeadm.go:310] 
	I0717 18:25:41.042156   62943 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:25:41.042215   62943 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:25:41.042378   62943 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:25:41.042389   62943 kubeadm.go:310] 
	I0717 18:25:41.042538   62943 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:25:41.042587   62943 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:25:41.042634   62943 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:25:41.042643   62943 kubeadm.go:310] 
	I0717 18:25:41.042794   62943 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:25:41.042907   62943 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:25:41.042917   62943 kubeadm.go:310] 
	I0717 18:25:41.043072   62943 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:25:41.043199   62943 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:25:41.043312   62943 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:25:41.043416   62943 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:25:41.043425   62943 kubeadm.go:310] 
	I0717 18:25:41.044058   62943 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:25:41.044184   62943 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:25:41.044273   62943 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
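The hints kubeadm prints above condense into a short checklist; a minimal sketch, using the cri-o socket shown in the log:

	# follow kubeadm's hints: check the kubelet, then look for crashed control-plane containers
	systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause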
	W0717 18:25:41.044426   62943 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-778511 localhost] and IPs [192.168.83.153 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-778511 localhost] and IPs [192.168.83.153 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 18:25:41.044489   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:25:41.551870   62943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:25:41.571254   62943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:25:41.584881   62943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:25:41.584905   62943 kubeadm.go:157] found existing configuration files:
	
	I0717 18:25:41.584964   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:25:41.596312   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:25:41.596393   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:25:41.608099   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:25:41.617042   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:25:41.617098   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:25:41.628673   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:25:41.637397   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:25:41.637454   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:25:41.649410   62943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:25:41.661805   62943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:25:41.661860   62943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:25:41.672783   62943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:25:41.744152   62943 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:25:41.744256   62943 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:25:41.897705   62943 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:25:41.897877   62943 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:25:41.898028   62943 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:25:42.085110   62943 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:25:42.193656   62943 out.go:204]   - Generating certificates and keys ...
	I0717 18:25:42.193773   62943 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:25:42.193858   62943 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:25:42.193963   62943 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:25:42.194043   62943 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:25:42.194139   62943 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:25:42.194263   62943 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:25:42.194392   62943 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:25:42.194467   62943 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:25:42.194579   62943 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:25:42.194735   62943 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:25:42.194784   62943 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:25:42.194872   62943 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:25:42.204269   62943 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:25:42.258195   62943 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:25:42.692681   62943 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:25:42.867935   62943 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:25:42.884639   62943 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:25:42.886702   62943 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:25:42.886779   62943 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:25:43.069463   62943 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:25:43.072097   62943 out.go:204]   - Booting up control plane ...
	I0717 18:25:43.072230   62943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:25:43.080680   62943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:25:43.083065   62943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:25:43.083796   62943 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:25:43.085853   62943 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:26:23.083310   62943 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:26:23.083611   62943 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:26:23.083783   62943 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:26:28.083785   62943 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:26:28.083992   62943 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:26:38.084040   62943 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:26:38.084317   62943 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:26:58.084304   62943 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:26:58.084571   62943 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:27:38.086442   62943 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:27:38.086723   62943 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:27:38.086734   62943 kubeadm.go:310] 
	I0717 18:27:38.086795   62943 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:27:38.086854   62943 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:27:38.086863   62943 kubeadm.go:310] 
	I0717 18:27:38.086911   62943 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:27:38.086971   62943 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:27:38.087147   62943 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:27:38.087169   62943 kubeadm.go:310] 
	I0717 18:27:38.087320   62943 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:27:38.087357   62943 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:27:38.087387   62943 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:27:38.087394   62943 kubeadm.go:310] 
	I0717 18:27:38.087494   62943 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:27:38.087620   62943 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:27:38.087638   62943 kubeadm.go:310] 
	I0717 18:27:38.087781   62943 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:27:38.087918   62943 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:27:38.088017   62943 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:27:38.088103   62943 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:27:38.088115   62943 kubeadm.go:310] 
	I0717 18:27:38.088740   62943 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:27:38.088842   62943 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:27:38.088929   62943 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:27:38.089107   62943 kubeadm.go:394] duration metric: took 3m55.047170337s to StartCluster
	I0717 18:27:38.089158   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:27:38.089210   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:27:38.150720   62943 cri.go:89] found id: ""
	I0717 18:27:38.150741   62943 logs.go:276] 0 containers: []
	W0717 18:27:38.150748   62943 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:27:38.150753   62943 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:27:38.150801   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:27:38.187561   62943 cri.go:89] found id: ""
	I0717 18:27:38.187595   62943 logs.go:276] 0 containers: []
	W0717 18:27:38.187606   62943 logs.go:278] No container was found matching "etcd"
	I0717 18:27:38.187612   62943 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:27:38.187675   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:27:38.226608   62943 cri.go:89] found id: ""
	I0717 18:27:38.226636   62943 logs.go:276] 0 containers: []
	W0717 18:27:38.226647   62943 logs.go:278] No container was found matching "coredns"
	I0717 18:27:38.226654   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:27:38.226727   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:27:38.269645   62943 cri.go:89] found id: ""
	I0717 18:27:38.269664   62943 logs.go:276] 0 containers: []
	W0717 18:27:38.269672   62943 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:27:38.269679   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:27:38.269725   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:27:38.317346   62943 cri.go:89] found id: ""
	I0717 18:27:38.317370   62943 logs.go:276] 0 containers: []
	W0717 18:27:38.317380   62943 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:27:38.317397   62943 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:27:38.317442   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:27:38.357515   62943 cri.go:89] found id: ""
	I0717 18:27:38.357640   62943 logs.go:276] 0 containers: []
	W0717 18:27:38.357653   62943 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:27:38.357666   62943 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:27:38.357731   62943 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:27:38.396820   62943 cri.go:89] found id: ""
	I0717 18:27:38.396849   62943 logs.go:276] 0 containers: []
	W0717 18:27:38.396864   62943 logs.go:278] No container was found matching "kindnet"
	I0717 18:27:38.396875   62943 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:27:38.396894   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:27:38.543763   62943 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:27:38.543783   62943 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:27:38.543795   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:27:38.663531   62943 logs.go:123] Gathering logs for container status ...
	I0717 18:27:38.663566   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:27:38.712670   62943 logs.go:123] Gathering logs for kubelet ...
	I0717 18:27:38.712699   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:27:38.791811   62943 logs.go:123] Gathering logs for dmesg ...
	I0717 18:27:38.791891   62943 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0717 18:27:38.808452   62943 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 18:27:38.808491   62943 out.go:239] * 
	W0717 18:27:38.808541   62943 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:27:38.808557   62943 out.go:239] * 
	W0717 18:27:38.809499   62943 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:27:38.813091   62943 out.go:177] 
	W0717 18:27:38.814439   62943 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:27:38.814499   62943 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 18:27:38.814579   62943 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 18:27:38.815975   62943 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-778511 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-778511
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-778511: (1.406140044s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-778511 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-778511 status --format={{.Host}}: exit status 7 (77.438252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-778511 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-778511 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.213817767s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-778511 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-778511 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-778511 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (80.741744ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-778511] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-778511
	    minikube start -p kubernetes-upgrade-778511 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7785112 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-778511 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
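The refusal above is expected (the harness attempts a downgrade that should fail). A sketch of suggestion (1), reusing the kvm2/crio flags from this run, would recreate the profile at the older version:

    # Delete the existing profile, then start fresh at v1.20.0.
    out/minikube-linux-amd64 delete -p kubernetes-upgrade-778511
    out/minikube-linux-amd64 start -p kubernetes-upgrade-778511 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio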
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-778511 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-778511 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (33.528383205s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-17 18:28:49.250483108 +0000 UTC m=+4643.979679102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-778511 -n kubernetes-upgrade-778511
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-778511 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-778511 logs -n 25: (1.59129265s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo cat                            | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo cat                            | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo cat                            | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo docker                         | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo cat                            | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo cat                            | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo cat                            | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo cat                            | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo                                | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo find                           | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p calico-235476 sudo crio                           | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p calico-235476                                     | calico-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC | 17 Jul 24 18:28 UTC |
	| start   | -p bridge-235476 --memory=3072                       | bridge-235476 | jenkins | v1.33.1 | 17 Jul 24 18:28 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |               |         |         |                     |                     |
	|         | --container-runtime=crio                             |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:28:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:28:36.192367   70641 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:28:36.192496   70641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:36.192510   70641 out.go:304] Setting ErrFile to fd 2...
	I0717 18:28:36.192515   70641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:28:36.192715   70641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:28:36.193368   70641 out.go:298] Setting JSON to false
	I0717 18:28:36.194567   70641 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7859,"bootTime":1721233057,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:28:36.194632   70641 start.go:139] virtualization: kvm guest
	I0717 18:28:36.281198   70641 out.go:177] * [bridge-235476] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:28:36.421923   70641 notify.go:220] Checking for updates...
	I0717 18:28:36.554412   70641 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:28:36.681231   70641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:28:36.707760   70641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:28:36.787341   70641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:28:36.788697   70641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:28:36.790132   70641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:28:36.792321   70641 config.go:182] Loaded profile config "custom-flannel-235476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:28:36.792455   70641 config.go:182] Loaded profile config "kubernetes-upgrade-778511": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:28:36.792602   70641 config.go:182] Loaded profile config "pause-371172": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:28:36.792693   70641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:28:36.832153   70641 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:28:36.833590   70641 start.go:297] selected driver: kvm2
	I0717 18:28:36.833610   70641 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:28:36.833620   70641 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:28:36.834416   70641 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:28:36.834500   70641 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:28:36.851739   70641 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:28:36.851799   70641 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:28:36.852129   70641 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:28:36.852173   70641 cni.go:84] Creating CNI manager for "bridge"
	I0717 18:28:36.852181   70641 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:28:36.852258   70641 start.go:340] cluster config:
	{Name:bridge-235476 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:bridge-235476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:28:36.852423   70641 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:28:36.854272   70641 out.go:177] * Starting "bridge-235476" primary control-plane node in "bridge-235476" cluster
	I0717 18:28:36.855665   70641 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:28:36.855705   70641 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:28:36.855728   70641 cache.go:56] Caching tarball of preloaded images
	I0717 18:28:36.855820   70641 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:28:36.855835   70641 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:28:36.855956   70641 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/config.json ...
	I0717 18:28:36.855982   70641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/config.json: {Name:mk820f54dc6fd6260001be6d87adfe1765726011 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:28:36.856154   70641 start.go:360] acquireMachinesLock for bridge-235476: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:28:36.856196   70641 start.go:364] duration metric: took 24.307µs to acquireMachinesLock for "bridge-235476"
	I0717 18:28:36.856222   70641 start.go:93] Provisioning new machine with config: &{Name:bridge-235476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:bridge-235476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:28:36.856309   70641 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 18:28:33.748565   68679 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.508047186s)
	I0717 18:28:33.748608   68679 crio.go:469] duration metric: took 2.508161148s to extract the tarball
	I0717 18:28:33.748618   68679 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:28:33.789246   68679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:28:33.831916   68679 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:28:33.831942   68679 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:28:33.831951   68679 kubeadm.go:934] updating node { 192.168.61.20 8443 v1.30.2 crio true true} ...
	I0717 18:28:33.832066   68679 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-235476 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:custom-flannel-235476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0717 18:28:33.832142   68679 ssh_runner.go:195] Run: crio config
	I0717 18:28:33.888096   68679 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0717 18:28:33.888136   68679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:28:33.888165   68679 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.20 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-235476 NodeName:custom-flannel-235476 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.20 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:28:33.888342   68679 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.20
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-235476"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.20
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:28:33.888412   68679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:28:33.898615   68679 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:28:33.898685   68679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:28:33.908963   68679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0717 18:28:33.926701   68679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:28:33.942209   68679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0717 18:28:33.959194   68679 ssh_runner.go:195] Run: grep 192.168.61.20	control-plane.minikube.internal$ /etc/hosts
	I0717 18:28:33.962813   68679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.20	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:28:33.974965   68679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:28:34.112038   68679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:28:34.130792   68679 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476 for IP: 192.168.61.20
	I0717 18:28:34.130821   68679 certs.go:194] generating shared ca certs ...
	I0717 18:28:34.130834   68679 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:28:34.130999   68679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:28:34.131061   68679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:28:34.131068   68679 certs.go:256] generating profile certs ...
	I0717 18:28:34.131126   68679 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.key
	I0717 18:28:34.131144   68679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt with IP's: []
	I0717 18:28:34.260272   68679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt ...
	I0717 18:28:34.260294   68679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: {Name:mk5851d8d729c36cd0c64956fc6e6b21e28cacb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:28:34.260444   68679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.key ...
	I0717 18:28:34.260455   68679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.key: {Name:mkf9847e125df8da87c0280292aeb0bb2dd8612a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:28:34.260527   68679 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.key.cd0d863a
	I0717 18:28:34.260541   68679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.crt.cd0d863a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.20]
	I0717 18:28:34.411961   68679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.crt.cd0d863a ...
	I0717 18:28:34.412001   68679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.crt.cd0d863a: {Name:mkc7f861bb9f6729e9ddd466170954b94a103a83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:28:34.412215   68679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.key.cd0d863a ...
	I0717 18:28:34.412240   68679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.key.cd0d863a: {Name:mk3d9df0c6b5b2b450c169a0c651e2488711ba57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:28:34.412372   68679 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.crt.cd0d863a -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.crt
	I0717 18:28:34.412471   68679 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.key.cd0d863a -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.key
	I0717 18:28:34.412557   68679 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/proxy-client.key
	I0717 18:28:34.412575   68679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/proxy-client.crt with IP's: []
	I0717 18:28:34.511966   68679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/proxy-client.crt ...
	I0717 18:28:34.511996   68679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/proxy-client.crt: {Name:mk78e59a7d6de5855b43cadead1b6fe873ef125b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:28:34.512180   68679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/proxy-client.key ...
	I0717 18:28:34.512199   68679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/proxy-client.key: {Name:mk328351a038ed65b7d94d500d30d7564f1ac5fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:28:34.512434   68679 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:28:34.512489   68679 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:28:34.512507   68679 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:28:34.512541   68679 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:28:34.512576   68679 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:28:34.512605   68679 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:28:34.512664   68679 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:28:34.513448   68679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:28:34.550879   68679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:28:34.580288   68679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:28:34.607060   68679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:28:34.637815   68679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 18:28:34.669815   68679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:28:34.695690   68679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:28:34.720869   68679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:28:34.753437   68679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:28:34.784538   68679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:28:34.816536   68679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:28:34.868540   68679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:28:34.897694   68679 ssh_runner.go:195] Run: openssl version
	I0717 18:28:34.903840   68679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:28:34.914425   68679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:28:34.918507   68679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:28:34.918592   68679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:28:34.924382   68679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:28:34.936496   68679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:28:34.948513   68679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:28:34.953787   68679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:28:34.953839   68679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:28:34.960578   68679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:28:34.971782   68679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:28:34.983380   68679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:28:34.988174   68679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:28:34.988227   68679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:28:34.994146   68679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:28:35.007253   68679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:28:35.011595   68679 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:28:35.011645   68679 kubeadm.go:392] StartCluster: {Name:custom-flannel-235476 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.2 ClusterName:custom-flannel-235476 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.20 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:28:35.011723   68679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:28:35.011784   68679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:28:35.083931   68679 cri.go:89] found id: ""
	I0717 18:28:35.084011   68679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:28:35.095631   68679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:28:35.106485   68679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:28:35.118925   68679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:28:35.118949   68679 kubeadm.go:157] found existing configuration files:
	
	I0717 18:28:35.118998   68679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:28:35.128791   68679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:28:35.128844   68679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:28:35.140091   68679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:28:35.150415   68679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:28:35.150471   68679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:28:35.161406   68679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:28:35.171530   68679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:28:35.171594   68679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:28:35.181674   68679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:28:35.191410   68679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:28:35.191478   68679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:28:35.200507   68679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:28:35.399550   68679 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:28:35.825684   69148 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721240915.815517534
	
	I0717 18:28:35.825713   69148 fix.go:216] guest clock: 1721240915.815517534
	I0717 18:28:35.825722   69148 fix.go:229] Guest: 2024-07-17 18:28:35.815517534 +0000 UTC Remote: 2024-07-17 18:28:35.705605397 +0000 UTC m=+19.980326450 (delta=109.912137ms)
	I0717 18:28:35.825745   69148 fix.go:200] guest clock delta is within tolerance: 109.912137ms
	I0717 18:28:35.825752   69148 start.go:83] releasing machines lock for "kubernetes-upgrade-778511", held for 7.450847211s
	I0717 18:28:35.825777   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:28:35.826049   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetIP
	I0717 18:28:35.828965   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:28:35.829344   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:27:51 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:28:35.829378   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:28:35.829593   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:28:35.830077   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:28:35.830256   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .DriverName
	I0717 18:28:35.830367   69148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:28:35.830408   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:28:35.830466   69148 ssh_runner.go:195] Run: cat /version.json
	I0717 18:28:35.830491   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHHostname
	I0717 18:28:35.833088   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:28:35.833362   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:28:35.833499   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:27:51 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:28:35.833526   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:28:35.833691   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:28:35.833849   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:27:51 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:28:35.833872   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:28:35.833901   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:28:35.834009   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHPort
	I0717 18:28:35.834093   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:28:35.834171   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHKeyPath
	I0717 18:28:35.834241   69148 sshutil.go:53] new ssh client: &{IP:192.168.83.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/id_rsa Username:docker}
	I0717 18:28:35.834281   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetSSHUsername
	I0717 18:28:35.834384   69148 sshutil.go:53] new ssh client: &{IP:192.168.83.153 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/kubernetes-upgrade-778511/id_rsa Username:docker}
	I0717 18:28:35.941879   69148 ssh_runner.go:195] Run: systemctl --version
	I0717 18:28:35.948424   69148 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:28:36.101479   69148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:28:36.106975   69148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:28:36.107029   69148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:28:36.117026   69148 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 18:28:36.117051   69148 start.go:495] detecting cgroup driver to use...
	I0717 18:28:36.117119   69148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:28:36.135219   69148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:28:36.150197   69148 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:28:36.150260   69148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:28:36.166661   69148 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:28:36.181336   69148 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:28:36.324975   69148 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:28:36.460549   69148 docker.go:233] disabling docker service ...
	I0717 18:28:36.460644   69148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:28:36.475854   69148 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:28:36.488591   69148 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:28:36.629245   69148 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:28:36.826384   69148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:28:36.887523   69148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:28:37.010024   69148 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 18:28:37.010079   69148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:28:37.058189   69148 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:28:37.058265   69148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:28:37.132434   69148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:28:37.289868   69148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:28:37.359265   69148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:28:37.412814   69148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:28:37.523572   69148 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:28:37.581226   69148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:28:37.650793   69148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:28:37.699317   69148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:28:37.740986   69148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:28:38.157159   69148 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:28:38.947387   69148 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:28:38.947464   69148 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:28:38.953689   69148 start.go:563] Will wait 60s for crictl version
	I0717 18:28:38.953748   69148 ssh_runner.go:195] Run: which crictl
	I0717 18:28:38.965720   69148 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:28:39.026664   69148 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
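After `sudo systemctl restart crio`, the log waits up to 60s for the socket path /var/run/crio/crio.sock and then for a working crictl. A small sketch of that wait step, assuming a plain stat poll is an acceptable stand-in for minikube's start.go logic:

// Sketch: poll for the CRI-O socket file with a 60-second deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is ready")
}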
	I0717 18:28:39.026762   69148 ssh_runner.go:195] Run: crio --version
	I0717 18:28:39.057082   69148 ssh_runner.go:195] Run: crio --version
	I0717 18:28:39.087826   69148 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 18:28:39.089096   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) Calling .GetIP
	I0717 18:28:39.092232   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:28:39.092666   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:65:e4", ip: ""} in network mk-kubernetes-upgrade-778511: {Iface:virbr3 ExpiryTime:2024-07-17 19:27:51 +0000 UTC Type:0 Mac:52:54:00:5e:65:e4 Iaid: IPaddr:192.168.83.153 Prefix:24 Hostname:kubernetes-upgrade-778511 Clientid:01:52:54:00:5e:65:e4}
	I0717 18:28:39.092695   69148 main.go:141] libmachine: (kubernetes-upgrade-778511) DBG | domain kubernetes-upgrade-778511 has defined IP address 192.168.83.153 and MAC address 52:54:00:5e:65:e4 in network mk-kubernetes-upgrade-778511
	I0717 18:28:39.092874   69148 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0717 18:28:39.097125   69148 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-778511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-778511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.153 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:28:39.097271   69148 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:28:39.097332   69148 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:28:39.140174   69148 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:28:39.140196   69148 crio.go:433] Images already preloaded, skipping extraction
	I0717 18:28:39.140253   69148 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:28:39.175296   69148 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:28:39.175336   69148 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:28:39.175345   69148 kubeadm.go:934] updating node { 192.168.83.153 8443 v1.31.0-beta.0 crio true true} ...
	I0717 18:28:39.175505   69148 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-778511 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.153
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-778511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:28:39.175602   69148 ssh_runner.go:195] Run: crio config
	I0717 18:28:39.227019   69148 cni.go:84] Creating CNI manager for ""
	I0717 18:28:39.227043   69148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:28:39.227056   69148 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:28:39.227075   69148 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.153 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-778511 NodeName:kubernetes-upgrade-778511 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.153"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.153 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:28:39.227238   69148 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.153
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-778511"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.153
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.153"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:28:39.227299   69148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 18:28:39.237941   69148 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:28:39.238007   69148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:28:39.247011   69148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0717 18:28:39.262299   69148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 18:28:39.279001   69148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
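The file copied to /var/tmp/minikube/kubeadm.yaml.new is the multi-document config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A hedged sketch that decodes each document and prints its apiVersion and kind with gopkg.in/yaml.v3; this is an assumption for illustration only, since minikube templates this file rather than parsing it back:

// Sketch: walk the multi-document kubeadm YAML and list each document's kind.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}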
	I0717 18:28:39.296507   69148 ssh_runner.go:195] Run: grep 192.168.83.153	control-plane.minikube.internal$ /etc/hosts
	I0717 18:28:39.300460   69148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:28:39.456210   69148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:28:39.473043   69148 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511 for IP: 192.168.83.153
	I0717 18:28:39.473068   69148 certs.go:194] generating shared ca certs ...
	I0717 18:28:39.473098   69148 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:28:39.473256   69148 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:28:39.473340   69148 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:28:39.473353   69148 certs.go:256] generating profile certs ...
	I0717 18:28:39.473476   69148 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/client.key
	I0717 18:28:39.473541   69148 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.key.79af623a
	I0717 18:28:39.473592   69148 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/proxy-client.key
	I0717 18:28:39.473764   69148 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:28:39.473802   69148 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:28:39.473816   69148 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:28:39.473861   69148 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:28:39.473893   69148 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:28:39.473928   69148 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:28:39.473991   69148 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:28:39.474586   69148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:28:39.503658   69148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:28:39.536233   69148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:28:39.563489   69148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:28:39.589685   69148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0717 18:28:39.617233   69148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 18:28:39.643698   69148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:28:39.669254   69148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kubernetes-upgrade-778511/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:28:39.697833   69148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:28:39.727273   69148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:28:39.758566   69148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:28:39.789036   69148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:28:39.808769   69148 ssh_runner.go:195] Run: openssl version
	I0717 18:28:39.815210   69148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:28:39.834339   69148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:28:39.839854   69148 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:28:39.839920   69148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:28:39.845471   69148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:28:39.857573   69148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:28:39.895365   69148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:28:39.918212   69148 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:28:39.918285   69148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:28:39.940924   69148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:28:39.983976   69148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:28:40.020643   69148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:28:40.111237   69148 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:28:40.111318   69148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:28:40.182889   69148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:28:40.324825   69148 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:28:40.401347   69148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:28:40.451420   69148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:28:40.498636   69148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:28:40.509601   69148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:28:40.518835   69148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:28:40.529133   69148 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
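The `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate remains valid for at least the next 24 hours. An equivalent check in Go using crypto/x509, as a sketch with the certificate path copied from the log:

// Sketch: report whether a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}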
	I0717 18:28:40.544326   69148 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-778511 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-778511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.153 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:28:40.544449   69148 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:28:40.544562   69148 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:28:40.623728   69148 cri.go:89] found id: "93a941ba3652077f31ae6f9b03e6d2654dd5a38fc5834b82dbbbe40dc1a63fbd"
	I0717 18:28:40.623753   69148 cri.go:89] found id: "2c7501778e942c1c19fc2e5c13cfc32b454d768f6a68836d52b3f83626923caa"
	I0717 18:28:40.623759   69148 cri.go:89] found id: "641ae7b729fdabe7cec074c478b8665cea9d5e8f9d9c9aa409456033fde46fa9"
	I0717 18:28:40.623773   69148 cri.go:89] found id: "bac24e0c48a31efb33c2f7bb15d71055a8ec2ed12757c1620e32472cc8d3b739"
	I0717 18:28:40.623777   69148 cri.go:89] found id: "d24f32482e8886b266183fa33e3f2bc06e6db518b42b99d931912ab8018538f6"
	I0717 18:28:40.623782   69148 cri.go:89] found id: "d264268183e0a5541601f3847b52f73067d90a04128b338cbe909bbfd0f807fb"
	I0717 18:28:40.623785   69148 cri.go:89] found id: "b54d73e40f2895061f0c5f14b321c294e124ae666e5c152ad5cdb5aae2a3c600"
	I0717 18:28:40.623789   69148 cri.go:89] found id: "816bc072557cae60207babba95f4ffa1ec8c354be7e5880bee6bc517129c5646"
	I0717 18:28:40.623793   69148 cri.go:89] found id: ""
	I0717 18:28:40.623848   69148 ssh_runner.go:195] Run: sudo runc list -f json
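The cluster start step above first lists existing kube-system containers through crictl with a namespace label filter, collecting the container IDs shown in the "found id" lines. A rough Go sketch of that listing command (a stand-in for illustration, not minikube's cri.go implementation):

// Sketch: shell out to crictl with the same label filter as the log
// and count the kube-system container IDs it returns.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
}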
	
	
	==> CRI-O <==
	Jul 17 18:28:49 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:49.948618494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721240929948596615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ae446d7-a88b-4afc-8975-61ffe99aa7a9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:49 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:49.949111258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c2badb9-6334-4893-8cf4-30edbd11fc51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:49 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:49.949178594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c2badb9-6334-4893-8cf4-30edbd11fc51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:49 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:49.949544649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88139c21cb6c865428affb6e5c8b404779f95dcd9785d1f01c8e7eff4122dbef,PodSandboxId:ff3136021e0796aba08faa4639c29fc51ba513f9a2cd751bdb62b7fa218934fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240926800105487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-56c8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0ea73b-43c7-4df4-8668-8b92cb8fb3b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be667d1057dc62a19bb193fd62212ea6fa0b5b1a8a45e36a7c5c05ce0fc3e81e,PodSandboxId:75d06ccf7a79b17cc30072925639f01a7874fa8b8909737282ce4fcd2476f72b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721240926804723282,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brcks,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: a52e9661-8ce7-4f9b-93c9-37e2da416871,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90c627bb7e44e60dfff38c2672dbd17c9d2477b1e1bd6d1ade740c20c4c7ae3,PodSandboxId:4e4308033271b51bd148ab94d8503d7096098a963c0ff3b879b9dd15371a3958,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240926818455840,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 02a7f177-5f7c-4601-b308-1e9713ce66ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a676c8f61f5de0b7983b218b883a05db3aaaafd3a2973cb5f2c5d5478c2b09a4,PodSandboxId:f252c346aadecde8852a4a3006170c5280d745eebe94a8b4b1ac0c7440e15156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721240923054902594,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3c60da9d22ce0ce59ea2fbc9
2cde7e,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc44122be18028f959c0bfd5a058ce0ea0f2f0b33bd06cf9c6b8812dd160dda,PodSandboxId:ecc3973bccdd6dc2239486d16df7eb926c596cf440b46205e35e2be9b0df62c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721240923009253497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5154dd7879a7c5868642b18cb707b5be
,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2df78f390c6ebdd6460720fe3bd8a85bf98d429b3dcd7d0a10a7e8d0d9332667,PodSandboxId:59658c60c5037fa41815d1cfb53e49ec8aedb1b7c5ae711a19ce9caa0c37db65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721240923006389931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc6770cbe
59d9f8ece456799804b1ec,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c303945ebed05093bcc09bf65a52a94c3b79936cb07c023d9cc2533acb0e1b,PodSandboxId:61b6e1528922b1f763c953d81dbb39c8e620cbd6f1d6cbc6b66be72182e5cba2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721240922979819404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a40c3db3fc7cc
d881a50a29afdba42,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed18d2697e7858e2d2b567628db59478fb3fb5ccfaf2275e67fc300de8813abe,PodSandboxId:69ae9f3d42cd82eccab98da278e1404441a7aae68c451f08a4b9642dac0c0770,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240920873524314,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4trqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5b773e-7aab-4565-96d7-b40017a3d127,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac24e0c48a31efb33c2f7bb15d71055a8ec2ed12757c1620e32472cc8d3b739,PodSandboxId:c3a9d7ade324b83275d5ac13cb40b23e3a8b4098fc7f31b435f47290b7500f5a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721240917427793525,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02a7f177-5f7c-4601-b308-1e9713ce66ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a941ba3652077f31ae6f9b03e6d2654dd5a38fc5834b82dbbbe40dc1a63fbd,PodSandboxId:ca317fee7e45f929a7a49fc086359dbed106dd91b9f0314a3905acc81512bb7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240917940332246,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-56c8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0ea73b-43c7-4df4-8668-8b92cb8fb3b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7501778e942c1c19fc2e5c13cfc32b454d768f6a68836d52b3f83626923caa,PodSandboxId:a8584a243d6cf16c4adcd1a11c964d705b0b26b1620d40910e4043e4a51202b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721240917526399529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3c60da9d22ce0ce59ea2fbc92cde7e,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641ae7b729fdabe7cec074c478b8665cea9d5e8f9d9c9aa409456033fde46fa9,PodSandboxId:410de13dc1efc998ffdc1354f3c3a154ec93407f30dc2e403f654a39b45e55af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721240917453859023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5154dd7879a7c5868642b18cb707b5be,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d264268183e0a5541601f3847b52f73067d90a04128b338cbe909bbfd0f807fb,PodSandboxId:a1bfff998f9e19637d8b1bfe8a66063684ab97c30030ec679e3330fc74d16191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c658136
9906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721240917270725020,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brcks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52e9661-8ce7-4f9b-93c9-37e2da416871,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f32482e8886b266183fa33e3f2bc06e6db518b42b99d931912ab8018538f6,PodSandboxId:73f126226d0965c43272f537778f2f819517d5be0e48acbae1ba1b11f7fe9881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddf
bced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721240917318970630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a40c3db3fc7ccd881a50a29afdba42,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54d73e40f2895061f0c5f14b321c294e124ae666e5c152ad5cdb5aae2a3c600,PodSandboxId:88f390c5b0fe92ff75002108506d2b228381c5b0eca6e79d9389cc53cbb9eca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d1
7692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721240917174267696,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc6770cbe59d9f8ece456799804b1ec,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816bc072557cae60207babba95f4ffa1ec8c354be7e5880bee6bc517129c5646,PodSandboxId:c1a6d5756fa6995bdab0be57824c80de66c235592397da93a14ae0bc299d3383,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240901036845764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4trqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5b773e-7aab-4565-96d7-b40017a3d127,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c2badb9-6334-4893-8cf4-30edbd11fc51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.002500641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e7e4b81-bb24-497a-bafa-ff0bc36f9784 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.002616414Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e7e4b81-bb24-497a-bafa-ff0bc36f9784 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.004026820Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ae19f26-d9ae-4472-a4e0-9a8290c9374a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.004721693Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721240930004688703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ae19f26-d9ae-4472-a4e0-9a8290c9374a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.005290986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37deed65-35a8-455e-b785-cb3581062fa8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.005366644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37deed65-35a8-455e-b785-cb3581062fa8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.007897792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88139c21cb6c865428affb6e5c8b404779f95dcd9785d1f01c8e7eff4122dbef,PodSandboxId:ff3136021e0796aba08faa4639c29fc51ba513f9a2cd751bdb62b7fa218934fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240926800105487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-56c8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0ea73b-43c7-4df4-8668-8b92cb8fb3b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be667d1057dc62a19bb193fd62212ea6fa0b5b1a8a45e36a7c5c05ce0fc3e81e,PodSandboxId:75d06ccf7a79b17cc30072925639f01a7874fa8b8909737282ce4fcd2476f72b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721240926804723282,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brcks,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: a52e9661-8ce7-4f9b-93c9-37e2da416871,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90c627bb7e44e60dfff38c2672dbd17c9d2477b1e1bd6d1ade740c20c4c7ae3,PodSandboxId:4e4308033271b51bd148ab94d8503d7096098a963c0ff3b879b9dd15371a3958,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240926818455840,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 02a7f177-5f7c-4601-b308-1e9713ce66ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a676c8f61f5de0b7983b218b883a05db3aaaafd3a2973cb5f2c5d5478c2b09a4,PodSandboxId:f252c346aadecde8852a4a3006170c5280d745eebe94a8b4b1ac0c7440e15156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721240923054902594,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3c60da9d22ce0ce59ea2fbc9
2cde7e,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc44122be18028f959c0bfd5a058ce0ea0f2f0b33bd06cf9c6b8812dd160dda,PodSandboxId:ecc3973bccdd6dc2239486d16df7eb926c596cf440b46205e35e2be9b0df62c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721240923009253497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5154dd7879a7c5868642b18cb707b5be
,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2df78f390c6ebdd6460720fe3bd8a85bf98d429b3dcd7d0a10a7e8d0d9332667,PodSandboxId:59658c60c5037fa41815d1cfb53e49ec8aedb1b7c5ae711a19ce9caa0c37db65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721240923006389931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc6770cbe
59d9f8ece456799804b1ec,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c303945ebed05093bcc09bf65a52a94c3b79936cb07c023d9cc2533acb0e1b,PodSandboxId:61b6e1528922b1f763c953d81dbb39c8e620cbd6f1d6cbc6b66be72182e5cba2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721240922979819404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a40c3db3fc7cc
d881a50a29afdba42,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed18d2697e7858e2d2b567628db59478fb3fb5ccfaf2275e67fc300de8813abe,PodSandboxId:69ae9f3d42cd82eccab98da278e1404441a7aae68c451f08a4b9642dac0c0770,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240920873524314,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4trqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5b773e-7aab-4565-96d7-b40017a3d127,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac24e0c48a31efb33c2f7bb15d71055a8ec2ed12757c1620e32472cc8d3b739,PodSandboxId:c3a9d7ade324b83275d5ac13cb40b23e3a8b4098fc7f31b435f47290b7500f5a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721240917427793525,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02a7f177-5f7c-4601-b308-1e9713ce66ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a941ba3652077f31ae6f9b03e6d2654dd5a38fc5834b82dbbbe40dc1a63fbd,PodSandboxId:ca317fee7e45f929a7a49fc086359dbed106dd91b9f0314a3905acc81512bb7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240917940332246,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-56c8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0ea73b-43c7-4df4-8668-8b92cb8fb3b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7501778e942c1c19fc2e5c13cfc32b454d768f6a68836d52b3f83626923caa,PodSandboxId:a8584a243d6cf16c4adcd1a11c964d705b0b26b1620d40910e4043e4a51202b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721240917526399529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3c60da9d22ce0ce59ea2fbc92cde7e,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641ae7b729fdabe7cec074c478b8665cea9d5e8f9d9c9aa409456033fde46fa9,PodSandboxId:410de13dc1efc998ffdc1354f3c3a154ec93407f30dc2e403f654a39b45e55af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721240917453859023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5154dd7879a7c5868642b18cb707b5be,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d264268183e0a5541601f3847b52f73067d90a04128b338cbe909bbfd0f807fb,PodSandboxId:a1bfff998f9e19637d8b1bfe8a66063684ab97c30030ec679e3330fc74d16191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c658136
9906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721240917270725020,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brcks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52e9661-8ce7-4f9b-93c9-37e2da416871,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f32482e8886b266183fa33e3f2bc06e6db518b42b99d931912ab8018538f6,PodSandboxId:73f126226d0965c43272f537778f2f819517d5be0e48acbae1ba1b11f7fe9881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddf
bced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721240917318970630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a40c3db3fc7ccd881a50a29afdba42,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54d73e40f2895061f0c5f14b321c294e124ae666e5c152ad5cdb5aae2a3c600,PodSandboxId:88f390c5b0fe92ff75002108506d2b228381c5b0eca6e79d9389cc53cbb9eca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d1
7692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721240917174267696,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc6770cbe59d9f8ece456799804b1ec,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816bc072557cae60207babba95f4ffa1ec8c354be7e5880bee6bc517129c5646,PodSandboxId:c1a6d5756fa6995bdab0be57824c80de66c235592397da93a14ae0bc299d3383,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240901036845764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4trqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5b773e-7aab-4565-96d7-b40017a3d127,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37deed65-35a8-455e-b785-cb3581062fa8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.055310605Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c617ad2e-7339-4c49-adf4-e9606a50f2a0 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.055403006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c617ad2e-7339-4c49-adf4-e9606a50f2a0 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.056686140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ce2544c6-7c12-4f1b-ba94-a61baa16ada6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.057302751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721240930057276951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ce2544c6-7c12-4f1b-ba94-a61baa16ada6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.057947873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d65c4a6e-e06a-4546-99c4-969b8f37efb9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.058029810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d65c4a6e-e06a-4546-99c4-969b8f37efb9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.058989485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88139c21cb6c865428affb6e5c8b404779f95dcd9785d1f01c8e7eff4122dbef,PodSandboxId:ff3136021e0796aba08faa4639c29fc51ba513f9a2cd751bdb62b7fa218934fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240926800105487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-56c8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0ea73b-43c7-4df4-8668-8b92cb8fb3b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be667d1057dc62a19bb193fd62212ea6fa0b5b1a8a45e36a7c5c05ce0fc3e81e,PodSandboxId:75d06ccf7a79b17cc30072925639f01a7874fa8b8909737282ce4fcd2476f72b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721240926804723282,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brcks,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: a52e9661-8ce7-4f9b-93c9-37e2da416871,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90c627bb7e44e60dfff38c2672dbd17c9d2477b1e1bd6d1ade740c20c4c7ae3,PodSandboxId:4e4308033271b51bd148ab94d8503d7096098a963c0ff3b879b9dd15371a3958,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240926818455840,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 02a7f177-5f7c-4601-b308-1e9713ce66ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a676c8f61f5de0b7983b218b883a05db3aaaafd3a2973cb5f2c5d5478c2b09a4,PodSandboxId:f252c346aadecde8852a4a3006170c5280d745eebe94a8b4b1ac0c7440e15156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721240923054902594,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3c60da9d22ce0ce59ea2fbc9
2cde7e,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc44122be18028f959c0bfd5a058ce0ea0f2f0b33bd06cf9c6b8812dd160dda,PodSandboxId:ecc3973bccdd6dc2239486d16df7eb926c596cf440b46205e35e2be9b0df62c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721240923009253497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5154dd7879a7c5868642b18cb707b5be
,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2df78f390c6ebdd6460720fe3bd8a85bf98d429b3dcd7d0a10a7e8d0d9332667,PodSandboxId:59658c60c5037fa41815d1cfb53e49ec8aedb1b7c5ae711a19ce9caa0c37db65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721240923006389931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc6770cbe
59d9f8ece456799804b1ec,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c303945ebed05093bcc09bf65a52a94c3b79936cb07c023d9cc2533acb0e1b,PodSandboxId:61b6e1528922b1f763c953d81dbb39c8e620cbd6f1d6cbc6b66be72182e5cba2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721240922979819404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a40c3db3fc7cc
d881a50a29afdba42,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed18d2697e7858e2d2b567628db59478fb3fb5ccfaf2275e67fc300de8813abe,PodSandboxId:69ae9f3d42cd82eccab98da278e1404441a7aae68c451f08a4b9642dac0c0770,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240920873524314,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4trqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5b773e-7aab-4565-96d7-b40017a3d127,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac24e0c48a31efb33c2f7bb15d71055a8ec2ed12757c1620e32472cc8d3b739,PodSandboxId:c3a9d7ade324b83275d5ac13cb40b23e3a8b4098fc7f31b435f47290b7500f5a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721240917427793525,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02a7f177-5f7c-4601-b308-1e9713ce66ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a941ba3652077f31ae6f9b03e6d2654dd5a38fc5834b82dbbbe40dc1a63fbd,PodSandboxId:ca317fee7e45f929a7a49fc086359dbed106dd91b9f0314a3905acc81512bb7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240917940332246,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-56c8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0ea73b-43c7-4df4-8668-8b92cb8fb3b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7501778e942c1c19fc2e5c13cfc32b454d768f6a68836d52b3f83626923caa,PodSandboxId:a8584a243d6cf16c4adcd1a11c964d705b0b26b1620d40910e4043e4a51202b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721240917526399529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3c60da9d22ce0ce59ea2fbc92cde7e,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641ae7b729fdabe7cec074c478b8665cea9d5e8f9d9c9aa409456033fde46fa9,PodSandboxId:410de13dc1efc998ffdc1354f3c3a154ec93407f30dc2e403f654a39b45e55af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721240917453859023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5154dd7879a7c5868642b18cb707b5be,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d264268183e0a5541601f3847b52f73067d90a04128b338cbe909bbfd0f807fb,PodSandboxId:a1bfff998f9e19637d8b1bfe8a66063684ab97c30030ec679e3330fc74d16191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c658136
9906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721240917270725020,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brcks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52e9661-8ce7-4f9b-93c9-37e2da416871,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f32482e8886b266183fa33e3f2bc06e6db518b42b99d931912ab8018538f6,PodSandboxId:73f126226d0965c43272f537778f2f819517d5be0e48acbae1ba1b11f7fe9881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddf
bced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721240917318970630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a40c3db3fc7ccd881a50a29afdba42,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54d73e40f2895061f0c5f14b321c294e124ae666e5c152ad5cdb5aae2a3c600,PodSandboxId:88f390c5b0fe92ff75002108506d2b228381c5b0eca6e79d9389cc53cbb9eca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d1
7692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721240917174267696,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc6770cbe59d9f8ece456799804b1ec,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816bc072557cae60207babba95f4ffa1ec8c354be7e5880bee6bc517129c5646,PodSandboxId:c1a6d5756fa6995bdab0be57824c80de66c235592397da93a14ae0bc299d3383,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240901036845764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4trqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5b773e-7aab-4565-96d7-b40017a3d127,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d65c4a6e-e06a-4546-99c4-969b8f37efb9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.094279777Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ef30767-37fb-4a90-8aa8-7f5c3027c040 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.094419008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ef30767-37fb-4a90-8aa8-7f5c3027c040 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.095772389Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b2178f7-36eb-409c-a7e5-95d551bc2665 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.096353185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721240930096331495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b2178f7-36eb-409c-a7e5-95d551bc2665 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.096957073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38995c01-d648-4bc7-9df6-7574bf09d254 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.097009981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38995c01-d648-4bc7-9df6-7574bf09d254 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:28:50 kubernetes-upgrade-778511 crio[3063]: time="2024-07-17 18:28:50.097488931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88139c21cb6c865428affb6e5c8b404779f95dcd9785d1f01c8e7eff4122dbef,PodSandboxId:ff3136021e0796aba08faa4639c29fc51ba513f9a2cd751bdb62b7fa218934fe,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240926800105487,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-56c8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0ea73b-43c7-4df4-8668-8b92cb8fb3b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be667d1057dc62a19bb193fd62212ea6fa0b5b1a8a45e36a7c5c05ce0fc3e81e,PodSandboxId:75d06ccf7a79b17cc30072925639f01a7874fa8b8909737282ce4fcd2476f72b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1721240926804723282,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brcks,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: a52e9661-8ce7-4f9b-93c9-37e2da416871,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d90c627bb7e44e60dfff38c2672dbd17c9d2477b1e1bd6d1ade740c20c4c7ae3,PodSandboxId:4e4308033271b51bd148ab94d8503d7096098a963c0ff3b879b9dd15371a3958,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721240926818455840,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 02a7f177-5f7c-4601-b308-1e9713ce66ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a676c8f61f5de0b7983b218b883a05db3aaaafd3a2973cb5f2c5d5478c2b09a4,PodSandboxId:f252c346aadecde8852a4a3006170c5280d745eebe94a8b4b1ac0c7440e15156,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721240923054902594,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3c60da9d22ce0ce59ea2fbc9
2cde7e,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdc44122be18028f959c0bfd5a058ce0ea0f2f0b33bd06cf9c6b8812dd160dda,PodSandboxId:ecc3973bccdd6dc2239486d16df7eb926c596cf440b46205e35e2be9b0df62c9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721240923009253497,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5154dd7879a7c5868642b18cb707b5be
,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2df78f390c6ebdd6460720fe3bd8a85bf98d429b3dcd7d0a10a7e8d0d9332667,PodSandboxId:59658c60c5037fa41815d1cfb53e49ec8aedb1b7c5ae711a19ce9caa0c37db65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721240923006389931,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc6770cbe
59d9f8ece456799804b1ec,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c303945ebed05093bcc09bf65a52a94c3b79936cb07c023d9cc2533acb0e1b,PodSandboxId:61b6e1528922b1f763c953d81dbb39c8e620cbd6f1d6cbc6b66be72182e5cba2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721240922979819404,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a40c3db3fc7cc
d881a50a29afdba42,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed18d2697e7858e2d2b567628db59478fb3fb5ccfaf2275e67fc300de8813abe,PodSandboxId:69ae9f3d42cd82eccab98da278e1404441a7aae68c451f08a4b9642dac0c0770,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721240920873524314,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4trqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5b773e-7aab-4565-96d7-b40017a3d127,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bac24e0c48a31efb33c2f7bb15d71055a8ec2ed12757c1620e32472cc8d3b739,PodSandboxId:c3a9d7ade324b83275d5ac13cb40b23e3a8b4098fc7f31b435f47290b7500f5a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721240917427793525,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02a7f177-5f7c-4601-b308-1e9713ce66ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93a941ba3652077f31ae6f9b03e6d2654dd5a38fc5834b82dbbbe40dc1a63fbd,PodSandboxId:ca317fee7e45f929a7a49fc086359dbed106dd91b9f0314a3905acc81512bb7f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240917940332246,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-56c8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0ea73b-43c7-4df4-8668-8b92cb8fb3b7,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7501778e942c1c19fc2e5c13cfc32b454d768f6a68836d52b3f83626923caa,PodSandboxId:a8584a243d6cf16c4adcd1a11c964d705b0b26b1620d40910e4043e4a51202b4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721240917526399529,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3c60da9d22ce0ce59ea2fbc92cde7e,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:641ae7b729fdabe7cec074c478b8665cea9d5e8f9d9c9aa409456033fde46fa9,PodSandboxId:410de13dc1efc998ffdc1354f3c3a154ec93407f30dc2e403f654a39b45e55af,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721240917453859023,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5154dd7879a7c5868642b18cb707b5be,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d264268183e0a5541601f3847b52f73067d90a04128b338cbe909bbfd0f807fb,PodSandboxId:a1bfff998f9e19637d8b1bfe8a66063684ab97c30030ec679e3330fc74d16191,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c658136
9906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721240917270725020,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brcks,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52e9661-8ce7-4f9b-93c9-37e2da416871,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d24f32482e8886b266183fa33e3f2bc06e6db518b42b99d931912ab8018538f6,PodSandboxId:73f126226d0965c43272f537778f2f819517d5be0e48acbae1ba1b11f7fe9881,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddf
bced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721240917318970630,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a40c3db3fc7ccd881a50a29afdba42,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b54d73e40f2895061f0c5f14b321c294e124ae666e5c152ad5cdb5aae2a3c600,PodSandboxId:88f390c5b0fe92ff75002108506d2b228381c5b0eca6e79d9389cc53cbb9eca0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d1
7692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721240917174267696,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-778511,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fc6770cbe59d9f8ece456799804b1ec,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:816bc072557cae60207babba95f4ffa1ec8c354be7e5880bee6bc517129c5646,PodSandboxId:c1a6d5756fa6995bdab0be57824c80de66c235592397da93a14ae0bc299d3383,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721240901036845764,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-4trqm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5b773e-7aab-4565-96d7-b40017a3d127,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38995c01-d648-4bc7-9df6-7574bf09d254 name=/runtime.v1.RuntimeService/ListContainers
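	The repeated ListContainers/Version/ImageFsInfo entries above are CRI-O's debug-level gRPC interceptor logs. If a longer window of runtime activity is needed, the same journal can be pulled from the node; a minimal sketch, assuming the kubernetes-upgrade-778511 profile is still running and CRI-O runs as the crio systemd unit:
	
	out/minikube-linux-amd64 -p kubernetes-upgrade-778511 ssh "sudo journalctl -u crio --no-pager | tail -n 200"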
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d90c627bb7e44       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   4e4308033271b       storage-provisioner
	be667d1057dc6       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   3 seconds ago       Running             kube-proxy                2                   75d06ccf7a79b       kube-proxy-brcks
	88139c21cb6c8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   ff3136021e079       coredns-5cfdc65f69-56c8x
	a676c8f61f5de       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago       Running             etcd                      2                   f252c346aadec       etcd-kubernetes-upgrade-778511
	cdc44122be180       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago       Running             kube-scheduler            2                   ecc3973bccdd6       kube-scheduler-kubernetes-upgrade-778511
	2df78f390c6eb       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   2                   59658c60c5037       kube-controller-manager-kubernetes-upgrade-778511
	e1c303945ebed       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            2                   61b6e1528922b       kube-apiserver-kubernetes-upgrade-778511
	ed18d2697e785       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 seconds ago       Running             coredns                   1                   69ae9f3d42cd8       coredns-5cfdc65f69-4trqm
	93a941ba36520       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   12 seconds ago      Exited              coredns                   1                   ca317fee7e45f       coredns-5cfdc65f69-56c8x
	2c7501778e942       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   12 seconds ago      Exited              etcd                      1                   a8584a243d6cf       etcd-kubernetes-upgrade-778511
	641ae7b729fda       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   12 seconds ago      Exited              kube-scheduler            1                   410de13dc1efc       kube-scheduler-kubernetes-upgrade-778511
	bac24e0c48a31       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Exited              storage-provisioner       1                   c3a9d7ade324b       storage-provisioner
	d24f32482e888       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   12 seconds ago      Exited              kube-apiserver            1                   73f126226d096       kube-apiserver-kubernetes-upgrade-778511
	d264268183e0a       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   12 seconds ago      Exited              kube-proxy                1                   a1bfff998f9e1       kube-proxy-brcks
	b54d73e40f289       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   13 seconds ago      Exited              kube-controller-manager   1                   88f390c5b0fe9       kube-controller-manager-kubernetes-upgrade-778511
	816bc072557ca       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   29 seconds ago      Exited              coredns                   0                   c1a6d5756fa69       coredns-5cfdc65f69-4trqm
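	The listing above follows crictl's container table format (all containers, including exited earlier attempts). A hedged way to regenerate it on the node, assuming crictl is available inside the minikube guest:
	
	out/minikube-linux-amd64 -p kubernetes-upgrade-778511 ssh "sudo crictl ps -a"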
	
	
	==> coredns [816bc072557cae60207babba95f4ffa1ec8c354be7e5880bee6bc517129c5646] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [88139c21cb6c865428affb6e5c8b404779f95dcd9785d1f01c8e7eff4122dbef] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [93a941ba3652077f31ae6f9b03e6d2654dd5a38fc5834b82dbbbe40dc1a63fbd] <==
	
	
	==> coredns [ed18d2697e7858e2d2b567628db59478fb3fb5ccfaf2275e67fc300de8813abe] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
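	The "dial tcp 10.96.0.1:443: connect: connection refused" lines above show CoreDNS failing to reach the in-cluster API service while the control plane was restarting; it eventually starts serving on :53 with a still-unsynced API (the [WARNING] line). To re-check the CoreDNS pods once the apiserver is back, assuming the kubeconfig context created for this profile, something like:
	
	kubectl --context kubernetes-upgrade-778511 -n kube-system logs -l k8s-app=kube-dns --tail=20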
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-778511
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-778511
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:28:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-778511
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:28:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:28:46 +0000   Wed, 17 Jul 2024 18:28:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:28:46 +0000   Wed, 17 Jul 2024 18:28:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:28:46 +0000   Wed, 17 Jul 2024 18:28:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:28:46 +0000   Wed, 17 Jul 2024 18:28:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.153
	  Hostname:    kubernetes-upgrade-778511
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e382958189d422fa26c60edaefe8efb
	  System UUID:                7e382958-189d-422f-a26c-60edaefe8efb
	  Boot ID:                    4e32958e-9b2e-43b8-b799-ebb445c142d9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-4trqm                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 coredns-5cfdc65f69-56c8x                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     30s
	  kube-system                 etcd-kubernetes-upgrade-778511                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         36s
	  kube-system                 kube-apiserver-kubernetes-upgrade-778511             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-778511    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-brcks                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-kubernetes-upgrade-778511             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  41s (x8 over 43s)  kubelet          Node kubernetes-upgrade-778511 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     41s (x7 over 43s)  kubelet          Node kubernetes-upgrade-778511 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    41s (x8 over 43s)  kubelet          Node kubernetes-upgrade-778511 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           30s                node-controller  Node kubernetes-upgrade-778511 event: Registered Node kubernetes-upgrade-778511 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-778511 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-778511 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-778511 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           0s                 node-controller  Node kubernetes-upgrade-778511 event: Registered Node kubernetes-upgrade-778511 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.391699] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.062396] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054905] systemd-fstab-generator[576]: Ignoring "noauto" option for root device
	[  +0.210813] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[Jul17 18:28] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.281343] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +4.025536] systemd-fstab-generator[729]: Ignoring "noauto" option for root device
	[  +2.154167] systemd-fstab-generator[849]: Ignoring "noauto" option for root device
	[  +0.063756] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.581694] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.087870] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.436072] kauditd_printk_skb: 18 callbacks suppressed
	[ +15.806486] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.089928] kauditd_printk_skb: 80 callbacks suppressed
	[  +0.057774] systemd-fstab-generator[2205]: Ignoring "noauto" option for root device
	[  +0.160944] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[  +0.153358] systemd-fstab-generator[2232]: Ignoring "noauto" option for root device
	[  +1.280881] systemd-fstab-generator[2850]: Ignoring "noauto" option for root device
	[  +1.382050] systemd-fstab-generator[3336]: Ignoring "noauto" option for root device
	[  +2.897368] systemd-fstab-generator[3972]: Ignoring "noauto" option for root device
	[  +0.086600] kauditd_printk_skb: 301 callbacks suppressed
	[  +5.154659] kauditd_printk_skb: 55 callbacks suppressed
	[  +0.582004] systemd-fstab-generator[4453]: Ignoring "noauto" option for root device
	
	
	==> etcd [2c7501778e942c1c19fc2e5c13cfc32b454d768f6a68836d52b3f83626923caa] <==
	{"level":"warn","ts":"2024-07-17T18:28:38.134194Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-17T18:28:38.134356Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.83.153:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.83.153:2380","--initial-cluster=kubernetes-upgrade-778511=https://192.168.83.153:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.83.153:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.83.153:2380","--name=kubernetes-upgrade-778511","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--sna
pshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-07-17T18:28:38.134556Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-07-17T18:28:38.134581Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-17T18:28:38.134595Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.83.153:2380"]}
	{"level":"info","ts":"2024-07-17T18:28:38.134667Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T18:28:38.15604Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.153:2379"]}
	{"level":"info","ts":"2024-07-17T18:28:38.157318Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.14","git-sha":"bf51a53a7","go-version":"go1.21.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-778511","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.83.153:2380"],"listen-peer-urls":["https://192.168.83.153:2380"],"advertise-client-urls":["https://192.168.83.153:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.153:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new
","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	
	
	==> etcd [a676c8f61f5de0b7983b218b883a05db3aaaafd3a2973cb5f2c5d5478c2b09a4] <==
	{"level":"info","ts":"2024-07-17T18:28:43.449053Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e51df2bd75c2636b","local-member-id":"baf9c43611ac1ee2","added-peer-id":"baf9c43611ac1ee2","added-peer-peer-urls":["https://192.168.83.153:2380"]}
	{"level":"info","ts":"2024-07-17T18:28:43.449179Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e51df2bd75c2636b","local-member-id":"baf9c43611ac1ee2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:28:43.449223Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:28:43.453328Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T18:28:43.455628Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T18:28:43.460822Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.83.153:2380"}
	{"level":"info","ts":"2024-07-17T18:28:43.460959Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.83.153:2380"}
	{"level":"info","ts":"2024-07-17T18:28:43.462132Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"baf9c43611ac1ee2","initial-advertise-peer-urls":["https://192.168.83.153:2380"],"listen-peer-urls":["https://192.168.83.153:2380"],"advertise-client-urls":["https://192.168.83.153:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.153:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T18:28:43.462814Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T18:28:44.601662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"baf9c43611ac1ee2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T18:28:44.601826Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"baf9c43611ac1ee2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:28:44.601892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"baf9c43611ac1ee2 received MsgPreVoteResp from baf9c43611ac1ee2 at term 2"}
	{"level":"info","ts":"2024-07-17T18:28:44.60194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"baf9c43611ac1ee2 became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T18:28:44.601971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"baf9c43611ac1ee2 received MsgVoteResp from baf9c43611ac1ee2 at term 3"}
	{"level":"info","ts":"2024-07-17T18:28:44.601998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"baf9c43611ac1ee2 became leader at term 3"}
	{"level":"info","ts":"2024-07-17T18:28:44.602032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: baf9c43611ac1ee2 elected leader baf9c43611ac1ee2 at term 3"}
	{"level":"info","ts":"2024-07-17T18:28:44.607841Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"baf9c43611ac1ee2","local-member-attributes":"{Name:kubernetes-upgrade-778511 ClientURLs:[https://192.168.83.153:2379]}","request-path":"/0/members/baf9c43611ac1ee2/attributes","cluster-id":"e51df2bd75c2636b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:28:44.608098Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:28:44.608213Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:28:44.608258Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:28:44.608326Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:28:44.609171Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T18:28:44.609352Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T18:28:44.610018Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T18:28:44.610526Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.153:2379"}
	
	
	==> kernel <==
	 18:28:50 up 1 min,  0 users,  load average: 1.32, 0.39, 0.14
	Linux kubernetes-upgrade-778511 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d24f32482e8886b266183fa33e3f2bc06e6db518b42b99d931912ab8018538f6] <==
	
	
	==> kube-apiserver [e1c303945ebed05093bcc09bf65a52a94c3b79936cb07c023d9cc2533acb0e1b] <==
	I0717 18:28:46.021073       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 18:28:46.096481       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 18:28:46.102901       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 18:28:46.102941       1 policy_source.go:224] refreshing policies
	I0717 18:28:46.105201       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:28:46.129494       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 18:28:46.129695       1 aggregator.go:171] initial CRD sync complete...
	I0717 18:28:46.129731       1 autoregister_controller.go:144] Starting autoregister controller
	I0717 18:28:46.129785       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 18:28:46.129833       1 cache.go:39] Caches are synced for autoregister controller
	I0717 18:28:46.130881       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 18:28:46.144241       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 18:28:46.144372       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0717 18:28:46.144399       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0717 18:28:46.149852       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 18:28:46.150701       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 18:28:46.157369       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0717 18:28:46.984906       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 18:28:47.131891       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 18:28:47.752455       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 18:28:47.785409       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 18:28:47.830933       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 18:28:47.931965       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 18:28:47.939055       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 18:28:50.522099       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2df78f390c6ebdd6460720fe3bd8a85bf98d429b3dcd7d0a10a7e8d0d9332667] <==
	I0717 18:28:50.407708       1 shared_informer.go:320] Caches are synced for stateful set
	I0717 18:28:50.414662       1 shared_informer.go:320] Caches are synced for endpoint
	I0717 18:28:50.419384       1 shared_informer.go:320] Caches are synced for expand
	I0717 18:28:50.451601       1 shared_informer.go:320] Caches are synced for ephemeral
	I0717 18:28:50.452926       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0717 18:28:50.454164       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 18:28:50.458828       1 shared_informer.go:320] Caches are synced for crt configmap
	I0717 18:28:50.465129       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0717 18:28:50.465203       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0717 18:28:50.465240       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0717 18:28:50.465249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0717 18:28:50.470858       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0717 18:28:50.470986       1 shared_informer.go:320] Caches are synced for attach detach
	I0717 18:28:50.474150       1 shared_informer.go:320] Caches are synced for PV protection
	I0717 18:28:50.489498       1 shared_informer.go:320] Caches are synced for PVC protection
	I0717 18:28:50.503220       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 18:28:50.503402       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-778511"
	I0717 18:28:50.503915       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0717 18:28:50.519684       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 18:28:50.520001       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 18:28:50.558206       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 18:28:50.578853       1 shared_informer.go:320] Caches are synced for service account
	I0717 18:28:50.597245       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 18:28:50.597277       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 18:28:50.615200       1 shared_informer.go:320] Caches are synced for namespace
	
	
	==> kube-controller-manager [b54d73e40f2895061f0c5f14b321c294e124ae666e5c152ad5cdb5aae2a3c600] <==
	
	
	==> kube-proxy [be667d1057dc62a19bb193fd62212ea6fa0b5b1a8a45e36a7c5c05ce0fc3e81e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 18:28:47.232585       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 18:28:47.256881       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.83.153"]
	E0717 18:28:47.257030       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 18:28:47.308167       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 18:28:47.308218       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:28:47.308250       1 server_linux.go:170] "Using iptables Proxier"
	I0717 18:28:47.311300       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 18:28:47.311601       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 18:28:47.311636       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:28:47.316080       1 config.go:197] "Starting service config controller"
	I0717 18:28:47.316113       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:28:47.316143       1 config.go:104] "Starting endpoint slice config controller"
	I0717 18:28:47.316150       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:28:47.316897       1 config.go:326] "Starting node config controller"
	I0717 18:28:47.316916       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:28:47.417182       1 shared_informer.go:320] Caches are synced for node config
	I0717 18:28:47.417261       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:28:47.417291       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d264268183e0a5541601f3847b52f73067d90a04128b338cbe909bbfd0f807fb] <==
	
	
	==> kube-scheduler [641ae7b729fdabe7cec074c478b8665cea9d5e8f9d9c9aa409456033fde46fa9] <==
	
	
	==> kube-scheduler [cdc44122be18028f959c0bfd5a058ce0ea0f2f0b33bd06cf9c6b8812dd160dda] <==
	I0717 18:28:43.741946       1 serving.go:386] Generated self-signed cert in-memory
	W0717 18:28:46.059792       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 18:28:46.059829       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:28:46.059878       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 18:28:46.059885       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 18:28:46.128652       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0717 18:28:46.128699       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:28:46.137209       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 18:28:46.139855       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0717 18:28:46.139991       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 18:28:46.140051       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 18:28:46.240656       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 18:28:42 kubernetes-upgrade-778511 kubelet[3979]: E0717 18:28:42.679179    3979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-778511?timeout=10s\": dial tcp 192.168.83.153:8443: connect: connection refused" interval="400ms"
	Jul 17 18:28:42 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:42.781300    3979 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-778511"
	Jul 17 18:28:42 kubernetes-upgrade-778511 kubelet[3979]: E0717 18:28:42.783320    3979 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.153:8443: connect: connection refused" node="kubernetes-upgrade-778511"
	Jul 17 18:28:42 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:42.967360    3979 scope.go:117] "RemoveContainer" containerID="d24f32482e8886b266183fa33e3f2bc06e6db518b42b99d931912ab8018538f6"
	Jul 17 18:28:42 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:42.970885    3979 scope.go:117] "RemoveContainer" containerID="b54d73e40f2895061f0c5f14b321c294e124ae666e5c152ad5cdb5aae2a3c600"
	Jul 17 18:28:42 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:42.974158    3979 scope.go:117] "RemoveContainer" containerID="641ae7b729fdabe7cec074c478b8665cea9d5e8f9d9c9aa409456033fde46fa9"
	Jul 17 18:28:42 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:42.995897    3979 scope.go:117] "RemoveContainer" containerID="2c7501778e942c1c19fc2e5c13cfc32b454d768f6a68836d52b3f83626923caa"
	Jul 17 18:28:43 kubernetes-upgrade-778511 kubelet[3979]: E0717 18:28:43.080989    3979 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-778511?timeout=10s\": dial tcp 192.168.83.153:8443: connect: connection refused" interval="800ms"
	Jul 17 18:28:43 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:43.185119    3979 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-778511"
	Jul 17 18:28:43 kubernetes-upgrade-778511 kubelet[3979]: E0717 18:28:43.186097    3979 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.83.153:8443: connect: connection refused" node="kubernetes-upgrade-778511"
	Jul 17 18:28:43 kubernetes-upgrade-778511 kubelet[3979]: W0717 18:28:43.274773    3979 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-778511&limit=500&resourceVersion=0": dial tcp 192.168.83.153:8443: connect: connection refused
	Jul 17 18:28:43 kubernetes-upgrade-778511 kubelet[3979]: E0717 18:28:43.274870    3979 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-778511&limit=500&resourceVersion=0\": dial tcp 192.168.83.153:8443: connect: connection refused" logger="UnhandledError"
	Jul 17 18:28:43 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:43.988697    3979 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-778511"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.165434    3979 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-778511"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.165691    3979 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-778511"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.165824    3979 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.167662    3979 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.453029    3979 apiserver.go:52] "Watching apiserver"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.475426    3979 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.476428    3979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a52e9661-8ce7-4f9b-93c9-37e2da416871-lib-modules\") pod \"kube-proxy-brcks\" (UID: \"a52e9661-8ce7-4f9b-93c9-37e2da416871\") " pod="kube-system/kube-proxy-brcks"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.476834    3979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/02a7f177-5f7c-4601-b308-1e9713ce66ab-tmp\") pod \"storage-provisioner\" (UID: \"02a7f177-5f7c-4601-b308-1e9713ce66ab\") " pod="kube-system/storage-provisioner"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.477536    3979 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a52e9661-8ce7-4f9b-93c9-37e2da416871-xtables-lock\") pod \"kube-proxy-brcks\" (UID: \"a52e9661-8ce7-4f9b-93c9-37e2da416871\") " pod="kube-system/kube-proxy-brcks"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.764146    3979 scope.go:117] "RemoveContainer" containerID="d264268183e0a5541601f3847b52f73067d90a04128b338cbe909bbfd0f807fb"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.764795    3979 scope.go:117] "RemoveContainer" containerID="bac24e0c48a31efb33c2f7bb15d71055a8ec2ed12757c1620e32472cc8d3b739"
	Jul 17 18:28:46 kubernetes-upgrade-778511 kubelet[3979]: I0717 18:28:46.765338    3979 scope.go:117] "RemoveContainer" containerID="93a941ba3652077f31ae6f9b03e6d2654dd5a38fc5834b82dbbbe40dc1a63fbd"
	
	
	==> storage-provisioner [bac24e0c48a31efb33c2f7bb15d71055a8ec2ed12757c1620e32472cc8d3b739] <==
	
	
	==> storage-provisioner [d90c627bb7e44e60dfff38c2672dbd17c9d2477b1e1bd6d1ade740c20c4c7ae3] <==
	I0717 18:28:47.071863       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:28:47.115594       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:28:47.115810       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:28:47.147721       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:28:47.148146       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-778511_faf6775a-3826-4deb-bb56-6cbeb48d3cb3!
	I0717 18:28:47.149489       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99a47793-d016-4955-99b8-5e7e5e1993a3", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-778511_faf6775a-3826-4deb-bb56-6cbeb48d3cb3 became leader
	I0717 18:28:47.251021       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-778511_faf6775a-3826-4deb-bb56-6cbeb48d3cb3!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:28:49.610604   70864 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19283-14386/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-778511 -n kubernetes-upgrade-778511
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-778511 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-778511" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-778511
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-778511: (1.117355973s)
--- FAIL: TestKubernetesUpgrade (342.82s)
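
Note on the stderr above: "bufio.Scanner: token too long" is the standard error from Go's bufio.Scanner, which by default rejects any token (line) larger than 64 KiB, so reading lastStart.txt line-by-line fails as soon as one captured log line exceeds that limit. The following is a minimal sketch of that stdlib behavior and the usual workaround of enlarging the scanner buffer; it is illustrative only and is not the minikube logs.go code.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// A single "line" longer than bufio.Scanner's default 64 KiB token limit.
		longLine := strings.Repeat("x", 100_000)

		// Default scanner: Scan() returns false and Err() reports
		// "bufio.Scanner: token too long" (bufio.ErrTooLong).
		s := bufio.NewScanner(strings.NewReader(longLine))
		for s.Scan() {
		}
		fmt.Fprintln(os.Stderr, "default scanner:", s.Err())

		// Same input with a larger buffer cap: the line is read normally.
		s = bufio.NewScanner(strings.NewReader(longLine))
		s.Buffer(make([]byte, 0, 64*1024), 1<<20) // allow tokens up to 1 MiB
		for s.Scan() {
			fmt.Println("read a line of length", len(s.Text()))
		}
		fmt.Fprintln(os.Stderr, "large-buffer scanner:", s.Err())
	}

Raising the cap with Scanner.Buffer (or reading with bufio.Reader.ReadString instead) is the conventional way to consume files that may contain very long lines.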

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (438.82s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-371172 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0717 18:25:41.791557   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-371172 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (7m15.213105384s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-371172] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-371172" primary control-plane node in "pause-371172" cluster
	* Updating the running kvm2 "pause-371172" VM ...
	* Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-371172" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:25:35.871765   64770 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:25:35.872022   64770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:25:35.872033   64770 out.go:304] Setting ErrFile to fd 2...
	I0717 18:25:35.872037   64770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:25:35.872248   64770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:25:35.872929   64770 out.go:298] Setting JSON to false
	I0717 18:25:35.873966   64770 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7679,"bootTime":1721233057,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:25:35.874039   64770 start.go:139] virtualization: kvm guest
	I0717 18:25:35.876248   64770 out.go:177] * [pause-371172] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:25:35.877572   64770 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:25:35.877613   64770 notify.go:220] Checking for updates...
	I0717 18:25:35.879992   64770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:25:35.881204   64770 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:25:35.882473   64770 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:25:35.883994   64770 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:25:35.885258   64770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:25:35.887109   64770 config.go:182] Loaded profile config "pause-371172": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:25:35.887740   64770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:25:35.887819   64770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:25:35.908653   64770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43481
	I0717 18:25:35.909124   64770 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:25:35.909774   64770 main.go:141] libmachine: Using API Version  1
	I0717 18:25:35.909804   64770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:25:35.910218   64770 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:25:35.910437   64770 main.go:141] libmachine: (pause-371172) Calling .DriverName
	I0717 18:25:35.910837   64770 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:25:35.911256   64770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:25:35.911304   64770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:25:35.928060   64770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39419
	I0717 18:25:35.928571   64770 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:25:35.929109   64770 main.go:141] libmachine: Using API Version  1
	I0717 18:25:35.929133   64770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:25:35.929558   64770 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:25:35.929736   64770 main.go:141] libmachine: (pause-371172) Calling .DriverName
	I0717 18:25:35.972821   64770 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:25:35.974166   64770 start.go:297] selected driver: kvm2
	I0717 18:25:35.974199   64770 start.go:901] validating driver "kvm2" against &{Name:pause-371172 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.2 ClusterName:pause-371172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:25:35.974402   64770 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:25:35.974810   64770 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:25:35.974892   64770 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:25:35.993273   64770 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:25:35.994217   64770 cni.go:84] Creating CNI manager for ""
	I0717 18:25:35.994237   64770 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:25:35.994315   64770 start.go:340] cluster config:
	{Name:pause-371172 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:pause-371172 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:25:35.994502   64770 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:25:35.996702   64770 out.go:177] * Starting "pause-371172" primary control-plane node in "pause-371172" cluster
	I0717 18:25:35.997993   64770 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:25:35.998036   64770 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:25:35.998044   64770 cache.go:56] Caching tarball of preloaded images
	I0717 18:25:35.998115   64770 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:25:35.998129   64770 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:25:35.998246   64770 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/pause-371172/config.json ...
	I0717 18:25:35.998502   64770 start.go:360] acquireMachinesLock for pause-371172: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:25:40.426146   64770 start.go:364] duration metric: took 4.427611284s to acquireMachinesLock for "pause-371172"
	I0717 18:25:40.426211   64770 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:25:40.426219   64770 fix.go:54] fixHost starting: 
	I0717 18:25:40.426734   64770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:25:40.426789   64770 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:25:40.446279   64770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46587
	I0717 18:25:40.446731   64770 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:25:40.447271   64770 main.go:141] libmachine: Using API Version  1
	I0717 18:25:40.447296   64770 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:25:40.447641   64770 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:25:40.447826   64770 main.go:141] libmachine: (pause-371172) Calling .DriverName
	I0717 18:25:40.447994   64770 main.go:141] libmachine: (pause-371172) Calling .GetState
	I0717 18:25:40.449604   64770 fix.go:112] recreateIfNeeded on pause-371172: state=Running err=<nil>
	W0717 18:25:40.449638   64770 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:25:40.600077   64770 out.go:177] * Updating the running kvm2 "pause-371172" VM ...
	I0717 18:25:40.763530   64770 machine.go:94] provisionDockerMachine start ...
	I0717 18:25:40.763583   64770 main.go:141] libmachine: (pause-371172) Calling .DriverName
	I0717 18:25:40.763866   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHHostname
	I0717 18:25:40.766524   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:40.766929   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:40.766958   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:40.767086   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHPort
	I0717 18:25:40.767279   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:40.767439   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:40.767577   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHUsername
	I0717 18:25:40.767750   64770 main.go:141] libmachine: Using SSH client type: native
	I0717 18:25:40.767986   64770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0717 18:25:40.768000   64770 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:25:40.881349   64770 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-371172
	
	I0717 18:25:40.881381   64770 main.go:141] libmachine: (pause-371172) Calling .GetMachineName
	I0717 18:25:40.881605   64770 buildroot.go:166] provisioning hostname "pause-371172"
	I0717 18:25:40.881636   64770 main.go:141] libmachine: (pause-371172) Calling .GetMachineName
	I0717 18:25:40.881800   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHHostname
	I0717 18:25:40.884681   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:40.885063   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:40.885091   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:40.885257   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHPort
	I0717 18:25:40.885432   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:40.885602   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:40.885728   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHUsername
	I0717 18:25:40.885889   64770 main.go:141] libmachine: Using SSH client type: native
	I0717 18:25:40.886094   64770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0717 18:25:40.886109   64770 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-371172 && echo "pause-371172" | sudo tee /etc/hostname
	I0717 18:25:41.025365   64770 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-371172
	
	I0717 18:25:41.025398   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHHostname
	I0717 18:25:41.028536   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:41.028935   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:41.028998   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:41.029203   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHPort
	I0717 18:25:41.029418   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:41.029580   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:41.029774   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHUsername
	I0717 18:25:41.029970   64770 main.go:141] libmachine: Using SSH client type: native
	I0717 18:25:41.030172   64770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0717 18:25:41.030191   64770 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-371172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-371172/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-371172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:25:41.150334   64770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:25:41.150369   64770 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:25:41.150410   64770 buildroot.go:174] setting up certificates
	I0717 18:25:41.150446   64770 provision.go:84] configureAuth start
	I0717 18:25:41.150464   64770 main.go:141] libmachine: (pause-371172) Calling .GetMachineName
	I0717 18:25:41.150798   64770 main.go:141] libmachine: (pause-371172) Calling .GetIP
	I0717 18:25:41.154079   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:41.154577   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:41.154600   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:41.154757   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHHostname
	I0717 18:25:41.157689   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:41.158045   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:41.158065   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:41.158275   64770 provision.go:143] copyHostCerts
	I0717 18:25:41.158365   64770 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:25:41.158395   64770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:25:41.158465   64770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:25:41.158621   64770 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:25:41.158637   64770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:25:41.158674   64770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:25:41.158786   64770 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:25:41.158800   64770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:25:41.158830   64770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:25:41.158917   64770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.pause-371172 san=[127.0.0.1 192.168.50.21 localhost minikube pause-371172]
	I0717 18:25:41.227864   64770 provision.go:177] copyRemoteCerts
	I0717 18:25:41.227923   64770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:25:41.227960   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHHostname
	I0717 18:25:41.231048   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:41.231490   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:41.231543   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:41.231778   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHPort
	I0717 18:25:41.231971   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:41.232133   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHUsername
	I0717 18:25:41.232270   64770 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/pause-371172/id_rsa Username:docker}
	I0717 18:25:41.323867   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:25:41.349019   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:25:41.377624   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 18:25:41.409133   64770 provision.go:87] duration metric: took 258.667992ms to configureAuth
	I0717 18:25:41.409170   64770 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:25:41.409487   64770 config.go:182] Loaded profile config "pause-371172": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:25:41.409592   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHHostname
	I0717 18:25:41.412213   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:41.412715   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:41.412742   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:41.412925   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHPort
	I0717 18:25:41.413154   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:41.413361   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:41.413553   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHUsername
	I0717 18:25:41.413720   64770 main.go:141] libmachine: Using SSH client type: native
	I0717 18:25:41.413890   64770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0717 18:25:41.413904   64770 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:25:47.019927   64770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:25:47.019961   64770 machine.go:97] duration metric: took 6.256392636s to provisionDockerMachine
	I0717 18:25:47.020009   64770 start.go:293] postStartSetup for "pause-371172" (driver="kvm2")
	I0717 18:25:47.020094   64770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:25:47.020126   64770 main.go:141] libmachine: (pause-371172) Calling .DriverName
	I0717 18:25:47.020515   64770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:25:47.020546   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHHostname
	I0717 18:25:47.023792   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:47.024180   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:47.024211   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:47.024393   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHPort
	I0717 18:25:47.024592   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:47.024779   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHUsername
	I0717 18:25:47.024964   64770 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/pause-371172/id_rsa Username:docker}
	I0717 18:25:47.118441   64770 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:25:47.123712   64770 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:25:47.123739   64770 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:25:47.123817   64770 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:25:47.123922   64770 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:25:47.124027   64770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:25:47.134751   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:25:47.169191   64770 start.go:296] duration metric: took 149.127812ms for postStartSetup
	I0717 18:25:47.169237   64770 fix.go:56] duration metric: took 6.743018481s for fixHost
	I0717 18:25:47.169314   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHHostname
	I0717 18:25:47.172619   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:47.173026   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:47.173063   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:47.173328   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHPort
	I0717 18:25:47.173551   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:47.173721   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:47.173822   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHUsername
	I0717 18:25:47.173953   64770 main.go:141] libmachine: Using SSH client type: native
	I0717 18:25:47.174165   64770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I0717 18:25:47.174182   64770 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:25:47.469047   64770 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721240747.459200870
	
	I0717 18:25:47.469067   64770 fix.go:216] guest clock: 1721240747.459200870
	I0717 18:25:47.469076   64770 fix.go:229] Guest: 2024-07-17 18:25:47.45920087 +0000 UTC Remote: 2024-07-17 18:25:47.16924866 +0000 UTC m=+11.341138862 (delta=289.95221ms)
	I0717 18:25:47.469094   64770 fix.go:200] guest clock delta is within tolerance: 289.95221ms
	I0717 18:25:47.469099   64770 start.go:83] releasing machines lock for "pause-371172", held for 7.042908985s
	I0717 18:25:47.469121   64770 main.go:141] libmachine: (pause-371172) Calling .DriverName
	I0717 18:25:47.469403   64770 main.go:141] libmachine: (pause-371172) Calling .GetIP
	I0717 18:25:47.472329   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:47.472770   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:47.472816   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:47.473052   64770 main.go:141] libmachine: (pause-371172) Calling .DriverName
	I0717 18:25:47.473696   64770 main.go:141] libmachine: (pause-371172) Calling .DriverName
	I0717 18:25:47.473882   64770 main.go:141] libmachine: (pause-371172) Calling .DriverName
	I0717 18:25:47.474001   64770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:25:47.474054   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHHostname
	I0717 18:25:47.474173   64770 ssh_runner.go:195] Run: cat /version.json
	I0717 18:25:47.474203   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHHostname
	I0717 18:25:47.477515   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:47.477644   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:47.478047   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:47.478067   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:47.478103   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:25:47.478145   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:25:47.478236   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHPort
	I0717 18:25:47.478279   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHPort
	I0717 18:25:47.478413   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:47.478471   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHKeyPath
	I0717 18:25:47.478569   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHUsername
	I0717 18:25:47.478631   64770 main.go:141] libmachine: (pause-371172) Calling .GetSSHUsername
	I0717 18:25:47.478723   64770 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/pause-371172/id_rsa Username:docker}
	I0717 18:25:47.478882   64770 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/pause-371172/id_rsa Username:docker}
	I0717 18:25:47.634866   64770 ssh_runner.go:195] Run: systemctl --version
	I0717 18:25:47.713944   64770 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:25:48.039270   64770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:25:48.060659   64770 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:25:48.060798   64770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:25:48.092233   64770 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 18:25:48.092260   64770 start.go:495] detecting cgroup driver to use...
	I0717 18:25:48.092363   64770 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:25:48.112225   64770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:25:48.135001   64770 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:25:48.135116   64770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:25:48.231747   64770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:25:48.265850   64770 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:25:48.518289   64770 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:25:48.684235   64770 docker.go:233] disabling docker service ...
	I0717 18:25:48.684308   64770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:25:48.703081   64770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:25:48.717260   64770 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:25:48.919895   64770 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:25:49.111545   64770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:25:49.131659   64770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:25:49.159295   64770 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:25:49.159369   64770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:25:49.175360   64770 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:25:49.175429   64770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:25:49.191054   64770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:25:49.205200   64770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:25:49.222065   64770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:25:49.234329   64770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:25:49.257918   64770 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:25:49.273981   64770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:25:49.286796   64770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:25:49.297575   64770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:25:49.311671   64770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:25:49.512747   64770 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:27:19.877229   64770 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.364439487s)
	I0717 18:27:19.877267   64770 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:27:19.877325   64770 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:27:19.883555   64770 start.go:563] Will wait 60s for crictl version
	I0717 18:27:19.883619   64770 ssh_runner.go:195] Run: which crictl
	I0717 18:27:19.887902   64770 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:27:19.929509   64770 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:27:19.929604   64770 ssh_runner.go:195] Run: crio --version
	I0717 18:27:19.957648   64770 ssh_runner.go:195] Run: crio --version
	I0717 18:27:19.986126   64770 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:27:19.987437   64770 main.go:141] libmachine: (pause-371172) Calling .GetIP
	I0717 18:27:19.990394   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:27:19.990730   64770 main.go:141] libmachine: (pause-371172) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:3f:89", ip: ""} in network mk-pause-371172: {Iface:virbr2 ExpiryTime:2024-07-17 19:24:13 +0000 UTC Type:0 Mac:52:54:00:07:3f:89 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:pause-371172 Clientid:01:52:54:00:07:3f:89}
	I0717 18:27:19.990755   64770 main.go:141] libmachine: (pause-371172) DBG | domain pause-371172 has defined IP address 192.168.50.21 and MAC address 52:54:00:07:3f:89 in network mk-pause-371172
	I0717 18:27:19.990970   64770 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 18:27:19.995429   64770 kubeadm.go:883] updating cluster {Name:pause-371172 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2
ClusterName:pause-371172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:27:19.995556   64770 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:27:19.995607   64770 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:27:20.040498   64770 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:27:20.040522   64770 crio.go:433] Images already preloaded, skipping extraction
	I0717 18:27:20.040590   64770 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:27:20.074889   64770 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:27:20.074911   64770 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:27:20.074918   64770 kubeadm.go:934] updating node { 192.168.50.21 8443 v1.30.2 crio true true} ...
	I0717 18:27:20.075030   64770 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-371172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:pause-371172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:27:20.075110   64770 ssh_runner.go:195] Run: crio config
	I0717 18:27:20.122792   64770 cni.go:84] Creating CNI manager for ""
	I0717 18:27:20.122815   64770 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:27:20.122833   64770 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:27:20.122851   64770 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.21 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-371172 NodeName:pause-371172 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:27:20.122983   64770 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-371172"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:27:20.123046   64770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:27:20.132807   64770 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:27:20.132879   64770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:27:20.141804   64770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0717 18:27:20.158509   64770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:27:20.175664   64770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0717 18:27:20.191763   64770 ssh_runner.go:195] Run: grep 192.168.50.21	control-plane.minikube.internal$ /etc/hosts
	I0717 18:27:20.195714   64770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:27:20.336736   64770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:27:20.353284   64770 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/pause-371172 for IP: 192.168.50.21
	I0717 18:27:20.353313   64770 certs.go:194] generating shared ca certs ...
	I0717 18:27:20.353331   64770 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:27:20.353517   64770 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:27:20.353574   64770 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:27:20.353586   64770 certs.go:256] generating profile certs ...
	I0717 18:27:20.353694   64770 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/pause-371172/client.key
	I0717 18:27:20.353799   64770 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/pause-371172/apiserver.key.71dbf864
	I0717 18:27:20.353852   64770 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/pause-371172/proxy-client.key
	I0717 18:27:20.353978   64770 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:27:20.354017   64770 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:27:20.354030   64770 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:27:20.354068   64770 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:27:20.354100   64770 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:27:20.354127   64770 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:27:20.354182   64770 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:27:20.355093   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:27:20.381590   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:27:20.404846   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:27:20.429775   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:27:20.454961   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/pause-371172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 18:27:20.518363   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/pause-371172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:27:20.608457   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/pause-371172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:27:20.683242   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/pause-371172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:27:20.748165   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:27:20.858124   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:27:20.932215   64770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:27:20.987422   64770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:27:21.015012   64770 ssh_runner.go:195] Run: openssl version
	I0717 18:27:21.035761   64770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:27:21.060476   64770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:27:21.065646   64770 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:27:21.065699   64770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:27:21.079914   64770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:27:21.099857   64770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:27:21.111974   64770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:27:21.118854   64770 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:27:21.118907   64770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:27:21.124778   64770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:27:21.141088   64770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:27:21.156277   64770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:27:21.161657   64770 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:27:21.161724   64770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:27:21.168801   64770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:27:21.187152   64770 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:27:21.196231   64770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:27:21.208788   64770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:27:21.219835   64770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:27:21.232800   64770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:27:21.242829   64770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:27:21.252740   64770 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:27:21.265217   64770 kubeadm.go:392] StartCluster: {Name:pause-371172 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 Cl
usterName:pause-371172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:27:21.265345   64770 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:27:21.265408   64770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:27:21.333765   64770 cri.go:89] found id: "97587d90d96081fe890fcfe6a27ff8d6b3325c318bb10a34567816944a716c25"
	I0717 18:27:21.333793   64770 cri.go:89] found id: "66c24efb0fdaf6d4e2e49ae452a0404d1c05ee0306287386724cab483b59d07e"
	I0717 18:27:21.333799   64770 cri.go:89] found id: "2fd577605ab4d6c34add83a14a7738e18f0cc32838fd21eb0c9d519b493ff7c7"
	I0717 18:27:21.333803   64770 cri.go:89] found id: "f712883dac0e3a23c06d1f62f744223b48038ab5113e51da67d580c0fdd78262"
	I0717 18:27:21.333808   64770 cri.go:89] found id: "8c62ab7abc602d7caffff7810c541eb2963cbf38c7f41837dd20f4c9e27c8235"
	I0717 18:27:21.333813   64770 cri.go:89] found id: "0c3be78cf9814b628db65eb7bf63018d74934f051b6844270628a7725add971f"
	I0717 18:27:21.333817   64770 cri.go:89] found id: "cdb5eedb37fb2d80468e0ae1824c48d5d7e76e9e6dbadf173f709dd2272a3bda"
	I0717 18:27:21.333822   64770 cri.go:89] found id: "329f58482d317f24ff3b2657aed6232e874be88f6b4ffe94cd16c176569744d7"
	I0717 18:27:21.333827   64770 cri.go:89] found id: "e7998fa28de27d5c39c480bf5c32bc5f958013edfaa022b5d497ddc080553b94"
	I0717 18:27:21.333838   64770 cri.go:89] found id: "b8330af434a888920daf6c37729496a57e47776eadc9c885caae06d6f794cf31"
	I0717 18:27:21.333846   64770 cri.go:89] found id: "c64550782047c5258731fccb7f980d8f4c0693a551658e2d1406a7b485e7e839"
	I0717 18:27:21.333851   64770 cri.go:89] found id: "6a76998655bc71d7c9dc758b399e479c27f102ec3533bf17ce8843db0a5e82f7"
	I0717 18:27:21.333858   64770 cri.go:89] found id: "f01c136ece4c8e71f985b7709634f95a87eeffd224c19908ff34583231b40ed1"
	I0717 18:27:21.333864   64770 cri.go:89] found id: "c50d3524e6a32c86f2fe706d87211f8aaf308d5bcbac13032dbbeb6dae577c4c"
	I0717 18:27:21.333879   64770 cri.go:89] found id: ""
	I0717 18:27:21.333931   64770 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-371172 -n pause-371172
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-371172 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-371172 logs -n 25: (1.076490778s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo journalctl -xeu kubelet                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC |                     |
	|         | sudo systemctl status docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC |                     |
	|         | sudo docker system info                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC |                     |
	|         | sudo systemctl status                                 |                           |         |         |                     |                     |
	|         | cri-docker --all --full                               |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat cri-docker                         |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476 sudo cat                 | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf  |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476 sudo cat                 | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo cri-dockerd --version                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC |                     |
	|         | sudo systemctl status                                 |                           |         |         |                     |                     |
	|         | containerd --all --full                               |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat containerd                         |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476 sudo cat                 | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | /lib/systemd/system/containerd.service                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo containerd config dump                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl status crio                            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat crio                               |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo find /etc/crio -type f                           |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                         |                           |         |         |                     |                     |
	|         | \;                                                    |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo crio config                                      |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	| start   | -p embed-certs-527415                                 | embed-certs-527415        | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:32 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                          |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-527415           | embed-certs-527415        | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p embed-certs-527415                                 | embed-certs-527415        | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:31:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:31:22.639596   77994 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:31:22.639939   77994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:31:22.639964   77994 out.go:304] Setting ErrFile to fd 2...
	I0717 18:31:22.639998   77994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:31:22.640332   77994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:31:22.640886   77994 out.go:298] Setting JSON to false
	I0717 18:31:22.641905   77994 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8026,"bootTime":1721233057,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:31:22.641959   77994 start.go:139] virtualization: kvm guest
	I0717 18:31:22.644248   77994 out.go:177] * [embed-certs-527415] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:31:22.645764   77994 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:31:22.645790   77994 notify.go:220] Checking for updates...
	I0717 18:31:22.648446   77994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:31:22.649650   77994 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:31:22.650971   77994 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:31:22.652267   77994 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:31:22.653530   77994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:31:22.655045   77994 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:31:22.655130   77994 config.go:182] Loaded profile config "old-k8s-version-019549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:31:22.655243   77994 config.go:182] Loaded profile config "pause-371172": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:31:22.655337   77994 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:31:22.691260   77994 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:31:22.692498   77994 start.go:297] selected driver: kvm2
	I0717 18:31:22.692523   77994 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:31:22.692538   77994 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:31:22.693340   77994 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:31:22.693419   77994 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:31:22.711624   77994 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:31:22.711696   77994 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:31:22.711928   77994 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:31:22.712003   77994 cni.go:84] Creating CNI manager for ""
	I0717 18:31:22.712017   77994 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:31:22.712024   77994 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:31:22.712120   77994 start.go:340] cluster config:
	{Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:31:22.712224   77994 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:31:22.713980   77994 out.go:177] * Starting "embed-certs-527415" primary control-plane node in "embed-certs-527415" cluster
	I0717 18:31:21.863073   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:21.863649   76391 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:31:21.863670   76391 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:31:21.863604   76525 retry.go:31] will retry after 5.420544594s: waiting for machine to come up
	I0717 18:31:21.953494   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:24.451856   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:22.715361   77994 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:31:22.715404   77994 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:31:22.715414   77994 cache.go:56] Caching tarball of preloaded images
	I0717 18:31:22.715574   77994 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:31:22.715594   77994 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:31:22.715717   77994 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json ...
	I0717 18:31:22.715742   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json: {Name:mk76bd6ccc31581a1abdd4a4a1a2d8d35752fa92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:22.715892   77994 start.go:360] acquireMachinesLock for embed-certs-527415: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:31:27.288475   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:27.289011   76391 main.go:141] libmachine: (no-preload-066175) Found IP for machine: 192.168.72.216
	I0717 18:31:27.289034   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has current primary IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:27.289041   76391 main.go:141] libmachine: (no-preload-066175) Reserving static IP address...
	I0717 18:31:27.289508   76391 main.go:141] libmachine: (no-preload-066175) DBG | unable to find host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"} in network mk-no-preload-066175
	I0717 18:31:27.369012   76391 main.go:141] libmachine: (no-preload-066175) Reserved static IP address: 192.168.72.216
	I0717 18:31:27.369041   76391 main.go:141] libmachine: (no-preload-066175) DBG | Getting to WaitForSSH function...
	I0717 18:31:27.369050   76391 main.go:141] libmachine: (no-preload-066175) Waiting for SSH to be available...
	I0717 18:31:27.371780   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:27.372105   76391 main.go:141] libmachine: (no-preload-066175) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175
	I0717 18:31:27.372132   76391 main.go:141] libmachine: (no-preload-066175) DBG | unable to find defined IP address of network mk-no-preload-066175 interface with MAC address 52:54:00:72:a5:17
	I0717 18:31:27.372149   76391 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH client type: external
	I0717 18:31:27.372193   76391 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa (-rw-------)
	I0717 18:31:27.372244   76391 main.go:141] libmachine: (no-preload-066175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:31:27.372261   76391 main.go:141] libmachine: (no-preload-066175) DBG | About to run SSH command:
	I0717 18:31:27.372276   76391 main.go:141] libmachine: (no-preload-066175) DBG | exit 0
	I0717 18:31:27.376589   76391 main.go:141] libmachine: (no-preload-066175) DBG | SSH cmd err, output: exit status 255: 
	I0717 18:31:27.376606   76391 main.go:141] libmachine: (no-preload-066175) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 18:31:27.376614   76391 main.go:141] libmachine: (no-preload-066175) DBG | command : exit 0
	I0717 18:31:27.376623   76391 main.go:141] libmachine: (no-preload-066175) DBG | err     : exit status 255
	I0717 18:31:27.376635   76391 main.go:141] libmachine: (no-preload-066175) DBG | output  : 
	I0717 18:31:26.952582   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:29.452659   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:31.785428   77994 start.go:364] duration metric: took 9.069515129s to acquireMachinesLock for "embed-certs-527415"
	I0717 18:31:31.785493   77994 start.go:93] Provisioning new machine with config: &{Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:31:31.785610   77994 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 18:31:31.787821   77994 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 18:31:31.787997   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:31:31.788041   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:31:31.805247   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37115
	I0717 18:31:31.805669   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:31:31.806215   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:31:31.806239   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:31:31.806763   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:31:31.806991   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:31:31.807166   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:31.807327   77994 start.go:159] libmachine.API.Create for "embed-certs-527415" (driver="kvm2")
	I0717 18:31:31.807359   77994 client.go:168] LocalClient.Create starting
	I0717 18:31:31.807399   77994 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 18:31:31.807436   77994 main.go:141] libmachine: Decoding PEM data...
	I0717 18:31:31.807457   77994 main.go:141] libmachine: Parsing certificate...
	I0717 18:31:31.807524   77994 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 18:31:31.807548   77994 main.go:141] libmachine: Decoding PEM data...
	I0717 18:31:31.807567   77994 main.go:141] libmachine: Parsing certificate...
	I0717 18:31:31.807590   77994 main.go:141] libmachine: Running pre-create checks...
	I0717 18:31:31.807606   77994 main.go:141] libmachine: (embed-certs-527415) Calling .PreCreateCheck
	I0717 18:31:31.808014   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:31:31.808462   77994 main.go:141] libmachine: Creating machine...
	I0717 18:31:31.808479   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Create
	I0717 18:31:31.808624   77994 main.go:141] libmachine: (embed-certs-527415) Creating KVM machine...
	I0717 18:31:31.809897   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found existing default KVM network
	I0717 18:31:31.811352   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:31.811169   78077 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4f:c7:84} reservation:<nil>}
	I0717 18:31:31.812075   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:31.812006   78077 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f3:32:5d} reservation:<nil>}
	I0717 18:31:31.813104   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:31.813019   78077 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a2fd0}
	I0717 18:31:31.813127   77994 main.go:141] libmachine: (embed-certs-527415) DBG | created network xml: 
	I0717 18:31:31.813140   77994 main.go:141] libmachine: (embed-certs-527415) DBG | <network>
	I0717 18:31:31.813149   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   <name>mk-embed-certs-527415</name>
	I0717 18:31:31.813161   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   <dns enable='no'/>
	I0717 18:31:31.813168   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   
	I0717 18:31:31.813184   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0717 18:31:31.813194   77994 main.go:141] libmachine: (embed-certs-527415) DBG |     <dhcp>
	I0717 18:31:31.813221   77994 main.go:141] libmachine: (embed-certs-527415) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0717 18:31:31.813242   77994 main.go:141] libmachine: (embed-certs-527415) DBG |     </dhcp>
	I0717 18:31:31.813252   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   </ip>
	I0717 18:31:31.813263   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   
	I0717 18:31:31.813274   77994 main.go:141] libmachine: (embed-certs-527415) DBG | </network>
	I0717 18:31:31.813283   77994 main.go:141] libmachine: (embed-certs-527415) DBG | 
	I0717 18:31:31.818167   77994 main.go:141] libmachine: (embed-certs-527415) DBG | trying to create private KVM network mk-embed-certs-527415 192.168.61.0/24...
	I0717 18:31:31.890335   77994 main.go:141] libmachine: (embed-certs-527415) DBG | private KVM network mk-embed-certs-527415 192.168.61.0/24 created
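	[Editor's note: for readability, the network definition emitted line-by-line in the DBG output above reassembles to roughly the following libvirt network XML. Element content is taken verbatim from the log; only the indentation is assumed.]
	
	<network>
	  <name>mk-embed-certs-527415</name>
	  <dns enable='no'/>
	
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	
	</network>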
	I0717 18:31:31.890370   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:31.890312   78077 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:31:31.890384   77994 main.go:141] libmachine: (embed-certs-527415) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415 ...
	I0717 18:31:31.890400   77994 main.go:141] libmachine: (embed-certs-527415) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:31:31.890484   77994 main.go:141] libmachine: (embed-certs-527415) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:31:32.148557   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:32.148429   78077 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa...
	I0717 18:31:32.296820   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:32.296676   78077 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/embed-certs-527415.rawdisk...
	I0717 18:31:32.296882   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Writing magic tar header
	I0717 18:31:32.296902   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Writing SSH key tar header
	I0717 18:31:32.296916   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:32.296808   78077 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415 ...
	I0717 18:31:32.296932   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415
	I0717 18:31:32.296971   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 18:31:32.296993   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:31:32.297010   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415 (perms=drwx------)
	I0717 18:31:32.297030   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:31:32.297044   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 18:31:32.297057   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 18:31:32.297067   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 18:31:32.297080   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:31:32.297111   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:31:32.297145   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:31:32.297159   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home
	I0717 18:31:32.297175   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Skipping /home - not owner
	I0717 18:31:32.297204   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:31:32.297220   77994 main.go:141] libmachine: (embed-certs-527415) Creating domain...
	I0717 18:31:32.298269   77994 main.go:141] libmachine: (embed-certs-527415) define libvirt domain using xml: 
	I0717 18:31:32.298285   77994 main.go:141] libmachine: (embed-certs-527415) <domain type='kvm'>
	I0717 18:31:32.298302   77994 main.go:141] libmachine: (embed-certs-527415)   <name>embed-certs-527415</name>
	I0717 18:31:32.298311   77994 main.go:141] libmachine: (embed-certs-527415)   <memory unit='MiB'>2200</memory>
	I0717 18:31:32.298321   77994 main.go:141] libmachine: (embed-certs-527415)   <vcpu>2</vcpu>
	I0717 18:31:32.298332   77994 main.go:141] libmachine: (embed-certs-527415)   <features>
	I0717 18:31:32.298344   77994 main.go:141] libmachine: (embed-certs-527415)     <acpi/>
	I0717 18:31:32.298355   77994 main.go:141] libmachine: (embed-certs-527415)     <apic/>
	I0717 18:31:32.298363   77994 main.go:141] libmachine: (embed-certs-527415)     <pae/>
	I0717 18:31:32.298376   77994 main.go:141] libmachine: (embed-certs-527415)     
	I0717 18:31:32.298420   77994 main.go:141] libmachine: (embed-certs-527415)   </features>
	I0717 18:31:32.298448   77994 main.go:141] libmachine: (embed-certs-527415)   <cpu mode='host-passthrough'>
	I0717 18:31:32.298462   77994 main.go:141] libmachine: (embed-certs-527415)   
	I0717 18:31:32.298474   77994 main.go:141] libmachine: (embed-certs-527415)   </cpu>
	I0717 18:31:32.298486   77994 main.go:141] libmachine: (embed-certs-527415)   <os>
	I0717 18:31:32.298498   77994 main.go:141] libmachine: (embed-certs-527415)     <type>hvm</type>
	I0717 18:31:32.298511   77994 main.go:141] libmachine: (embed-certs-527415)     <boot dev='cdrom'/>
	I0717 18:31:32.298524   77994 main.go:141] libmachine: (embed-certs-527415)     <boot dev='hd'/>
	I0717 18:31:32.298551   77994 main.go:141] libmachine: (embed-certs-527415)     <bootmenu enable='no'/>
	I0717 18:31:32.298576   77994 main.go:141] libmachine: (embed-certs-527415)   </os>
	I0717 18:31:32.298595   77994 main.go:141] libmachine: (embed-certs-527415)   <devices>
	I0717 18:31:32.298614   77994 main.go:141] libmachine: (embed-certs-527415)     <disk type='file' device='cdrom'>
	I0717 18:31:32.298633   77994 main.go:141] libmachine: (embed-certs-527415)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/boot2docker.iso'/>
	I0717 18:31:32.298646   77994 main.go:141] libmachine: (embed-certs-527415)       <target dev='hdc' bus='scsi'/>
	I0717 18:31:32.298660   77994 main.go:141] libmachine: (embed-certs-527415)       <readonly/>
	I0717 18:31:32.298688   77994 main.go:141] libmachine: (embed-certs-527415)     </disk>
	I0717 18:31:32.298709   77994 main.go:141] libmachine: (embed-certs-527415)     <disk type='file' device='disk'>
	I0717 18:31:32.298726   77994 main.go:141] libmachine: (embed-certs-527415)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:31:32.298741   77994 main.go:141] libmachine: (embed-certs-527415)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/embed-certs-527415.rawdisk'/>
	I0717 18:31:32.298754   77994 main.go:141] libmachine: (embed-certs-527415)       <target dev='hda' bus='virtio'/>
	I0717 18:31:32.298767   77994 main.go:141] libmachine: (embed-certs-527415)     </disk>
	I0717 18:31:32.298778   77994 main.go:141] libmachine: (embed-certs-527415)     <interface type='network'>
	I0717 18:31:32.298796   77994 main.go:141] libmachine: (embed-certs-527415)       <source network='mk-embed-certs-527415'/>
	I0717 18:31:32.298810   77994 main.go:141] libmachine: (embed-certs-527415)       <model type='virtio'/>
	I0717 18:31:32.298822   77994 main.go:141] libmachine: (embed-certs-527415)     </interface>
	I0717 18:31:32.298836   77994 main.go:141] libmachine: (embed-certs-527415)     <interface type='network'>
	I0717 18:31:32.298846   77994 main.go:141] libmachine: (embed-certs-527415)       <source network='default'/>
	I0717 18:31:32.298866   77994 main.go:141] libmachine: (embed-certs-527415)       <model type='virtio'/>
	I0717 18:31:32.298885   77994 main.go:141] libmachine: (embed-certs-527415)     </interface>
	I0717 18:31:32.298899   77994 main.go:141] libmachine: (embed-certs-527415)     <serial type='pty'>
	I0717 18:31:32.298911   77994 main.go:141] libmachine: (embed-certs-527415)       <target port='0'/>
	I0717 18:31:32.298924   77994 main.go:141] libmachine: (embed-certs-527415)     </serial>
	I0717 18:31:32.298941   77994 main.go:141] libmachine: (embed-certs-527415)     <console type='pty'>
	I0717 18:31:32.298972   77994 main.go:141] libmachine: (embed-certs-527415)       <target type='serial' port='0'/>
	I0717 18:31:32.298994   77994 main.go:141] libmachine: (embed-certs-527415)     </console>
	I0717 18:31:32.299007   77994 main.go:141] libmachine: (embed-certs-527415)     <rng model='virtio'>
	I0717 18:31:32.299019   77994 main.go:141] libmachine: (embed-certs-527415)       <backend model='random'>/dev/random</backend>
	I0717 18:31:32.299039   77994 main.go:141] libmachine: (embed-certs-527415)     </rng>
	I0717 18:31:32.299058   77994 main.go:141] libmachine: (embed-certs-527415)     
	I0717 18:31:32.299077   77994 main.go:141] libmachine: (embed-certs-527415)     
	I0717 18:31:32.299093   77994 main.go:141] libmachine: (embed-certs-527415)   </devices>
	I0717 18:31:32.299104   77994 main.go:141] libmachine: (embed-certs-527415) </domain>
	I0717 18:31:32.299113   77994 main.go:141] libmachine: (embed-certs-527415) 
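	[Editor's note: the domain definition logged above, reassembled for readability into the libvirt XML minikube defines for the embed-certs-527415 VM. Content is copied from the DBG lines; indentation and blank lines are assumptions.]
	
	<domain type='kvm'>
	  <name>embed-certs-527415</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/embed-certs-527415.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-embed-certs-527415'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>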
	I0717 18:31:32.303768   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:b7:0f:9b in network default
	I0717 18:31:32.304404   77994 main.go:141] libmachine: (embed-certs-527415) Ensuring networks are active...
	I0717 18:31:32.304423   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:32.305118   77994 main.go:141] libmachine: (embed-certs-527415) Ensuring network default is active
	I0717 18:31:32.305479   77994 main.go:141] libmachine: (embed-certs-527415) Ensuring network mk-embed-certs-527415 is active
	I0717 18:31:32.306020   77994 main.go:141] libmachine: (embed-certs-527415) Getting domain xml...
	I0717 18:31:32.306702   77994 main.go:141] libmachine: (embed-certs-527415) Creating domain...
	I0717 18:31:30.378080   76391 main.go:141] libmachine: (no-preload-066175) DBG | Getting to WaitForSSH function...
	I0717 18:31:30.381087   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.381517   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.381540   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.381651   76391 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH client type: external
	I0717 18:31:30.381676   76391 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa (-rw-------)
	I0717 18:31:30.381712   76391 main.go:141] libmachine: (no-preload-066175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:31:30.381731   76391 main.go:141] libmachine: (no-preload-066175) DBG | About to run SSH command:
	I0717 18:31:30.381752   76391 main.go:141] libmachine: (no-preload-066175) DBG | exit 0
	I0717 18:31:30.509436   76391 main.go:141] libmachine: (no-preload-066175) DBG | SSH cmd err, output: <nil>: 
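	[Editor's note: the external SSH probe logged above corresponds approximately to the following single invocation, reassembled from the argument slice in the DBG line and the "exit 0" command that follows it. The exact quoting of the remote command is an assumption.]
	
	/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa -p 22 "exit 0"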
	I0717 18:31:30.509704   76391 main.go:141] libmachine: (no-preload-066175) KVM machine creation complete!
	I0717 18:31:30.510079   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetConfigRaw
	I0717 18:31:30.510684   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:30.510894   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:30.511044   76391 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:31:30.511059   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:31:30.512486   76391 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:31:30.512510   76391 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:31:30.512518   76391 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:31:30.512526   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:30.514844   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.515158   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.515209   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.515304   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:30.515476   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.515626   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.515769   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:30.515948   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:30.516136   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:30.516146   76391 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:31:30.620056   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:31:30.620086   76391 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:31:30.620097   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:30.623128   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.623464   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.623492   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.623614   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:30.623804   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.623963   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.624081   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:30.624233   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:30.624441   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:30.624455   76391 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:31:30.725315   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:31:30.725443   76391 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:31:30.725460   76391 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:31:30.725471   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:31:30.725748   76391 buildroot.go:166] provisioning hostname "no-preload-066175"
	I0717 18:31:30.725779   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:31:30.725989   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:30.728433   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.728879   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.728912   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.729094   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:30.729263   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.729426   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.729560   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:30.729718   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:30.729980   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:30.729998   76391 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-066175 && echo "no-preload-066175" | sudo tee /etc/hostname
	I0717 18:31:30.846184   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-066175
	
	I0717 18:31:30.846217   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:30.849259   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.849550   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.849588   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.849752   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:30.849920   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.850083   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.850225   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:30.850401   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:30.850556   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:30.850573   76391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-066175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-066175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-066175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:31:30.961590   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:31:30.961620   76391 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:31:30.961675   76391 buildroot.go:174] setting up certificates
	I0717 18:31:30.961690   76391 provision.go:84] configureAuth start
	I0717 18:31:30.961710   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:31:30.962027   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:31:30.964583   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.964991   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.965026   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.965165   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:30.967244   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.967701   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.967723   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.967915   76391 provision.go:143] copyHostCerts
	I0717 18:31:30.967989   76391 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:31:30.968001   76391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:31:30.968057   76391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:31:30.968147   76391 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:31:30.968155   76391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:31:30.968176   76391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:31:30.968238   76391 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:31:30.968245   76391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:31:30.968261   76391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:31:30.968317   76391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.no-preload-066175 san=[127.0.0.1 192.168.72.216 localhost minikube no-preload-066175]
	I0717 18:31:31.143419   76391 provision.go:177] copyRemoteCerts
	I0717 18:31:31.143473   76391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:31:31.143495   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.146046   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.146368   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.146391   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.146657   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.146862   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.147028   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.147173   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:31:31.226668   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 18:31:31.248332   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:31:31.269415   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:31:31.290074   76391 provision.go:87] duration metric: took 328.36699ms to configureAuth
	I0717 18:31:31.290100   76391 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:31:31.290253   76391 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:31:31.290332   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.293271   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.293624   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.293655   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.293795   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.293946   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.294100   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.294210   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.294359   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:31.294536   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:31.294557   76391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:31:31.553507   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:31:31.553536   76391 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:31:31.553546   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetURL
	I0717 18:31:31.554736   76391 main.go:141] libmachine: (no-preload-066175) DBG | Using libvirt version 6000000
	I0717 18:31:31.557056   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.557387   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.557417   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.557578   76391 main.go:141] libmachine: Docker is up and running!
	I0717 18:31:31.557594   76391 main.go:141] libmachine: Reticulating splines...
	I0717 18:31:31.557602   76391 client.go:171] duration metric: took 27.982696356s to LocalClient.Create
	I0717 18:31:31.557639   76391 start.go:167] duration metric: took 27.982768994s to libmachine.API.Create "no-preload-066175"
	I0717 18:31:31.557648   76391 start.go:293] postStartSetup for "no-preload-066175" (driver="kvm2")
	I0717 18:31:31.557663   76391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:31:31.557686   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:31.557925   76391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:31:31.557945   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.560136   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.560489   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.560518   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.560656   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.560870   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.561030   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.561147   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:31:31.642798   76391 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:31:31.646461   76391 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:31:31.646482   76391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:31:31.646552   76391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:31:31.646641   76391 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:31:31.646748   76391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:31:31.655092   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:31:31.676712   76391 start.go:296] duration metric: took 119.050486ms for postStartSetup
	I0717 18:31:31.676757   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetConfigRaw
	I0717 18:31:31.677369   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:31:31.679689   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.679993   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.680022   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.680278   76391 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/config.json ...
	I0717 18:31:31.680472   76391 start.go:128] duration metric: took 28.126495252s to createHost
	I0717 18:31:31.680495   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.682709   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.683016   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.683037   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.683146   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.683412   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.683625   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.683827   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.684040   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:31.684202   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:31.684214   76391 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:31:31.785298   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241091.773532814
	
	I0717 18:31:31.785315   76391 fix.go:216] guest clock: 1721241091.773532814
	I0717 18:31:31.785322   76391 fix.go:229] Guest: 2024-07-17 18:31:31.773532814 +0000 UTC Remote: 2024-07-17 18:31:31.680483267 +0000 UTC m=+37.507086707 (delta=93.049547ms)
	I0717 18:31:31.785340   76391 fix.go:200] guest clock delta is within tolerance: 93.049547ms
	I0717 18:31:31.785345   76391 start.go:83] releasing machines lock for "no-preload-066175", held for 28.23152162s
	I0717 18:31:31.785377   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:31.785674   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:31:31.788670   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.789059   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.789085   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.789279   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:31.789779   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:31.789980   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:31.790065   76391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:31:31.790112   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.790315   76391 ssh_runner.go:195] Run: cat /version.json
	I0717 18:31:31.790344   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.792870   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.793115   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.793325   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.793352   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.793470   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.793591   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.793613   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.793647   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.793773   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.793818   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.793939   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:31:31.794015   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.794152   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.794306   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:31:31.869672   76391 ssh_runner.go:195] Run: systemctl --version
	I0717 18:31:31.905274   76391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:31:32.072627   76391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:31:32.078233   76391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:31:32.078301   76391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:31:32.092822   76391 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:31:32.092847   76391 start.go:495] detecting cgroup driver to use...
	I0717 18:31:32.092911   76391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:31:32.108053   76391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:31:32.122303   76391 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:31:32.122369   76391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:31:32.136321   76391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:31:32.150254   76391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:31:32.273363   76391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:31:32.422162   76391 docker.go:233] disabling docker service ...
	I0717 18:31:32.422221   76391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:31:32.436118   76391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:31:32.448832   76391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:31:32.585000   76391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:31:32.708483   76391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:31:32.724100   76391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:31:32.740515   76391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 18:31:32.740590   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.753527   76391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:31:32.753586   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.765797   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.775331   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.785046   76391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:31:32.794885   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.804604   76391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.820620   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
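Taken together, the sed edits above should leave the CRI-O drop-in with a pause image, a cgroupfs cgroup manager, a pod conmon cgroup, and the unprivileged-port sysctl. A sketch of verifying the result on the node (the expected output is reconstructed from the commands, not captured from this VM):

    $ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
      "net.ipv4.ip_unprivileged_port_start=0",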
	I0717 18:31:32.830014   76391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:31:32.839851   76391 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:31:32.839893   76391 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:31:32.853080   76391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
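The sysctl probe above fails only because br_netfilter is not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding directly through /proc. For comparison, a persistent equivalent of the same preparation (illustrative; minikube itself does it imperatively as shown):

    $ echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    $ printf 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
    $ sudo sysctl --system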
	I0717 18:31:32.862938   76391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:31:32.995893   76391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:31:33.137303   76391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:31:33.137370   76391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:31:33.142293   76391 start.go:563] Will wait 60s for crictl version
	I0717 18:31:33.142339   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.145670   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:31:33.181362   76391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:31:33.181435   76391 ssh_runner.go:195] Run: crio --version
	I0717 18:31:33.209245   76391 ssh_runner.go:195] Run: crio --version
	I0717 18:31:33.237648   76391 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 18:31:33.238943   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:31:33.242151   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:33.242633   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:33.242669   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:33.242985   76391 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 18:31:33.246924   76391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:31:33.259634   76391 kubeadm.go:883] updating cluster {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:31:33.259733   76391 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:31:33.259769   76391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:31:33.293987   76391 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 18:31:33.294011   76391 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:31:33.294070   76391 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:33.294089   76391 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:31:33.294150   76391 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 18:31:33.294171   76391 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:31:33.294097   76391 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:31:33.294070   76391 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:31:33.294070   76391 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:31:33.294096   76391 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:31:33.295633   76391 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:31:33.295687   76391 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:33.295692   76391 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:31:33.295695   76391 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:31:33.295635   76391 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 18:31:33.295644   76391 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:31:33.295695   76391 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:31:33.295633   76391 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:31:33.477115   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:31:33.512387   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 18:31:33.515338   76391 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 18:31:33.515385   76391 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:31:33.515429   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.516497   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:31:33.526652   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:31:33.531476   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:31:33.544357   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 18:31:33.574814   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:31:33.578483   76391 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 18:31:33.578531   76391 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:31:33.578540   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:31:33.578585   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.638901   76391 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 18:31:33.638946   76391 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:31:33.638997   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.658595   76391 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 18:31:33.658643   76391 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:31:33.658694   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.683215   76391 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 18:31:33.683261   76391 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:31:33.683313   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.683221   76391 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0717 18:31:33.683378   76391 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0717 18:31:33.683429   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.695172   76391 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 18:31:33.695216   76391 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:31:33.695262   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.696647   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 18:31:33.696735   76391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:31:33.696737   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 18:31:33.696782   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:31:33.696783   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:31:33.700105   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:31:33.700117   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0717 18:31:33.701036   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:31:33.716716   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0': No such file or directory
	I0717 18:31:33.716754   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (27889152 bytes)
	I0717 18:31:33.851087   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 18:31:33.851202   76391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:31:33.860422   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 18:31:33.860472   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 18:31:33.860526   76391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:31:33.860568   76391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:31:33.860629   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0717 18:31:33.860623   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 18:31:33.860672   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 18:31:33.860689   76391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.10
	I0717 18:31:33.860719   76391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:31:33.860729   76391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:31:33.894844   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.14-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.14-0': No such file or directory
	I0717 18:31:33.894894   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 --> /var/lib/minikube/images/etcd_3.5.14-0 (56932864 bytes)
	I0717 18:31:33.898993   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0717 18:31:33.899031   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0717 18:31:33.899033   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0': No such file or directory
	I0717 18:31:33.899037   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0717 18:31:33.899088   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0717 18:31:33.899064   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (20081152 bytes)
	I0717 18:31:33.899152   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0': No such file or directory
	I0717 18:31:33.899176   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (30186496 bytes)
	I0717 18:31:33.899188   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0': No such file or directory
	I0717 18:31:33.899214   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (26149888 bytes)
	I0717 18:31:34.005437   76391 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I0717 18:31:34.005498   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I0717 18:31:34.101340   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
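The pattern above repeats for every cached image: the tarball is copied to /var/lib/minikube/images and imported with podman load; assuming podman and CRI-O share the same containers/storage on this guest image (an assumption, consistent with the crictl-based image handling elsewhere in the log), the loaded image becomes visible to the kubelet without any separate crictl step. A hand-run equivalent for a single image would look like:

    $ sudo podman load -i /var/lib/minikube/images/pause_3.10
    $ sudo crictl images | grep pause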
	I0717 18:31:31.955496   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:34.452829   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:33.589246   77994 main.go:141] libmachine: (embed-certs-527415) Waiting to get IP...
	I0717 18:31:33.590252   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:33.590812   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:33.590839   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:33.590791   78077 retry.go:31] will retry after 212.1232ms: waiting for machine to come up
	I0717 18:31:33.804446   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:33.805108   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:33.805141   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:33.805038   78077 retry.go:31] will retry after 329.640925ms: waiting for machine to come up
	I0717 18:31:34.136730   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:34.137459   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:34.137485   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:34.137398   78077 retry.go:31] will retry after 474.208397ms: waiting for machine to come up
	I0717 18:31:34.613070   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:34.613555   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:34.613589   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:34.613507   78077 retry.go:31] will retry after 480.946138ms: waiting for machine to come up
	I0717 18:31:35.096126   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:35.096758   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:35.096787   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:35.096706   78077 retry.go:31] will retry after 619.792149ms: waiting for machine to come up
	I0717 18:31:35.718511   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:35.719154   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:35.719183   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:35.719105   78077 retry.go:31] will retry after 617.83695ms: waiting for machine to come up
	I0717 18:31:36.339089   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:36.339551   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:36.339577   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:36.339504   78077 retry.go:31] will retry after 1.119290876s: waiting for machine to come up
	I0717 18:31:37.460583   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:37.461228   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:37.461256   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:37.461178   78077 retry.go:31] will retry after 1.078022658s: waiting for machine to come up
	I0717 18:31:34.764584   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0717 18:31:34.764627   76391 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:31:34.764677   76391 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 18:31:34.764723   76391 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:34.764767   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:34.764684   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:31:37.440119   76391 ssh_runner.go:235] Completed: which crictl: (2.675324301s)
	I0717 18:31:37.440199   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:37.440212   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.675403717s)
	I0717 18:31:37.440234   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 18:31:37.440263   76391 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:31:37.440332   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:31:36.454130   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:38.454403   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:38.540880   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:38.541390   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:38.541413   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:38.541299   78077 retry.go:31] will retry after 1.425823371s: waiting for machine to come up
	I0717 18:31:39.968956   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:39.969608   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:39.969654   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:39.969555   78077 retry.go:31] will retry after 2.03401538s: waiting for machine to come up
	I0717 18:31:42.005548   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:42.006145   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:42.006186   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:42.006097   78077 retry.go:31] will retry after 2.798937612s: waiting for machine to come up
	I0717 18:31:39.409448   76391 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.969219545s)
	I0717 18:31:39.409478   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.96912201s)
	I0717 18:31:39.409502   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 18:31:39.409529   76391 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:31:39.409583   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:31:39.409503   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 18:31:39.409686   76391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:31:41.372476   76391 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.962762593s)
	I0717 18:31:41.372520   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0717 18:31:41.372535   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.962924114s)
	I0717 18:31:41.372549   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 18:31:41.372548   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0717 18:31:41.372584   76391 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:31:41.372659   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:31:43.269851   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.89716244s)
	I0717 18:31:43.269883   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 18:31:43.269910   76391 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:31:43.269986   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:31:40.955183   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:43.451812   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:45.452884   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:44.808105   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:44.808594   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:44.808616   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:44.808574   78077 retry.go:31] will retry after 2.417317368s: waiting for machine to come up
	I0717 18:31:47.227937   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:47.228407   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:47.228427   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:47.228378   78077 retry.go:31] will retry after 4.217313619s: waiting for machine to come up
	I0717 18:31:45.241544   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.971531191s)
	I0717 18:31:45.241572   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 18:31:45.241608   76391 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:31:45.241673   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:31:48.409933   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.168231143s)
	I0717 18:31:48.409964   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 18:31:48.410000   76391 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:31:48.410071   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:31:49.066543   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 18:31:49.066589   76391 cache_images.go:123] Successfully loaded all cached images
	I0717 18:31:49.066601   76391 cache_images.go:92] duration metric: took 15.772574999s to LoadCachedImages
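With LoadCachedImages finished, all eight images should now be visible to the runtime. A quick spot check on the node (illustrative):

    $ sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns|pause|storage-provisioner'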
	I0717 18:31:49.066615   76391 kubeadm.go:934] updating node { 192.168.72.216 8443 v1.31.0-beta.0 crio true true} ...
	I0717 18:31:49.066740   76391 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-066175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
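The [Unit]/[Service]/[Install] fragment above is the kubelet unit override minikube renders for this node. Once written to the systemd drop-in directory (typically /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in kubeadm-style layouts; the exact path is not shown at this point in the log), it takes effect with the usual sequence:

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable --now kubelet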
	I0717 18:31:49.066801   76391 ssh_runner.go:195] Run: crio config
	I0717 18:31:49.114337   76391 cni.go:84] Creating CNI manager for ""
	I0717 18:31:49.114361   76391 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:31:49.114374   76391 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:31:49.114409   76391 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.216 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-066175 NodeName:no-preload-066175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:31:49.114568   76391 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-066175"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
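The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) make up the single config file kubeadm is later driven with. A hedged sketch of that bootstrap step, using the binaries directory referenced in the next lines and an illustrative config path, since the actual invocation is not shown here:

    $ sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests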
	
	I0717 18:31:49.114642   76391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 18:31:49.124651   76391 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0-beta.0': No such file or directory
	
	Initiating transfer...
	I0717 18:31:49.124706   76391 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 18:31:49.133972   76391 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256
	I0717 18:31:49.134057   76391 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubelet
	I0717 18:31:49.134101   76391 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubeadm
	I0717 18:31:49.134065   76391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl
	I0717 18:31:49.138829   76391 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl': No such file or directory
	I0717 18:31:49.138853   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl (56209560 bytes)
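Note the "?checksum=file:<url>.sha256" suffix on the download URLs above: each binary is verified against its published SHA-256 before landing in the cache. A minimal shell equivalent of that pattern (illustrative, not minikube's own code path):

    $ curl -LO "https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl"
    $ curl -LO "https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256"
    $ echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check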
	I0717 18:31:47.951981   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:49.953069   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:51.450034   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.450725   77994 main.go:141] libmachine: (embed-certs-527415) Found IP for machine: 192.168.61.90
	I0717 18:31:51.450755   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has current primary IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.450761   77994 main.go:141] libmachine: (embed-certs-527415) Reserving static IP address...
	I0717 18:31:51.451197   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"} in network mk-embed-certs-527415
	I0717 18:31:51.523934   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Getting to WaitForSSH function...
	I0717 18:31:51.523969   77994 main.go:141] libmachine: (embed-certs-527415) Reserved static IP address: 192.168.61.90
	I0717 18:31:51.524009   77994 main.go:141] libmachine: (embed-certs-527415) Waiting for SSH to be available...
	I0717 18:31:51.526885   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.527351   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:51.527381   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.527540   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH client type: external
	I0717 18:31:51.527564   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa (-rw-------)
	I0717 18:31:51.527598   77994 main.go:141] libmachine: (embed-certs-527415) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:31:51.527612   77994 main.go:141] libmachine: (embed-certs-527415) DBG | About to run SSH command:
	I0717 18:31:51.527625   77994 main.go:141] libmachine: (embed-certs-527415) DBG | exit 0
	I0717 18:31:51.656746   77994 main.go:141] libmachine: (embed-certs-527415) DBG | SSH cmd err, output: <nil>: 
	I0717 18:31:51.657034   77994 main.go:141] libmachine: (embed-certs-527415) KVM machine creation complete!
	I0717 18:31:51.657367   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:31:51.657882   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:51.658124   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:51.658283   77994 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:31:51.658300   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:31:51.659706   77994 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:31:51.659722   77994 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:31:51.659729   77994 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:31:51.659738   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:51.661978   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.662282   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:51.662309   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.662414   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:51.662596   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.662734   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.662877   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:51.663040   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:51.663259   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:51.663270   77994 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:31:51.775852   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:31:51.775881   77994 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:31:51.775892   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:51.778538   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.778987   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:51.779011   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.779222   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:51.779428   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.779657   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.779808   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:51.779974   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:51.780153   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:51.780166   77994 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:31:51.889084   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:31:51.889175   77994 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:31:51.889191   77994 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:31:51.889201   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:31:51.889456   77994 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:31:51.889478   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:31:51.889696   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:51.892515   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.892901   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:51.892927   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.893105   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:51.893297   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.893473   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.893595   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:51.893738   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:51.893915   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:51.893931   77994 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-527415 && echo "embed-certs-527415" | sudo tee /etc/hostname
	I0717 18:31:52.019955   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-527415
	
	I0717 18:31:52.019982   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.023120   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.023422   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.023448   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.023633   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.023934   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.024106   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.024247   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.024397   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:52.024570   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:52.024592   77994 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-527415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-527415/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-527415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:31:52.141225   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:31:52.141255   77994 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:31:52.141306   77994 buildroot.go:174] setting up certificates
	I0717 18:31:52.141330   77994 provision.go:84] configureAuth start
	I0717 18:31:52.141347   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:31:52.141628   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:31:52.144442   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.144763   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.144791   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.144935   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.147182   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.147550   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.147589   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.147748   77994 provision.go:143] copyHostCerts
	I0717 18:31:52.147806   77994 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:31:52.147817   77994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:31:52.147866   77994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:31:52.147955   77994 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:31:52.147963   77994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:31:52.147984   77994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:31:52.148057   77994 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:31:52.148064   77994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:31:52.148086   77994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:31:52.148141   77994 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.embed-certs-527415 san=[127.0.0.1 192.168.61.90 embed-certs-527415 localhost minikube]
	I0717 18:31:52.252587   77994 provision.go:177] copyRemoteCerts
	I0717 18:31:52.252660   77994 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:31:52.252689   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.255106   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.255484   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.255518   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.255761   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.255952   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.256129   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.256298   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:31:52.342533   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:31:52.367027   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:31:52.390985   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 18:31:52.412089   77994 provision.go:87] duration metric: took 270.743656ms to configureAuth
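The configureAuth step above issues a machine server certificate signed by the minikube CA, covering the SANs listed in the log (127.0.0.1 192.168.61.90 embed-certs-527415 localhost minikube), and then copies it to /etc/docker on the guest. A minimal openssl sketch of the same idea, with illustrative file names rather than minikube's own paths:

    # hypothetical sketch: sign a server cert with an existing CA and the SANs shown above
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.embed-certs-527415" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.90,DNS:embed-certs-527415,DNS:localhost,DNS:minikube")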
	I0717 18:31:52.412129   77994 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:31:52.412308   77994 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:31:52.412412   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.415290   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.415645   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.415671   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.415836   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.416018   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.416176   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.416294   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.416496   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:52.416689   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:52.416707   77994 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:31:50.157551   76391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:31:50.172158   76391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0-beta.0/kubelet
	I0717 18:31:50.176457   76391 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0-beta.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet': No such file or directory
	I0717 18:31:50.176496   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.31.0-beta.0/kubelet (76643576 bytes)
	I0717 18:31:53.717739   76391 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm
	I0717 18:31:53.722817   76391 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm': No such file or directory
	I0717 18:31:53.722860   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm (58110104 bytes)
	I0717 18:31:53.964050   76391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:31:53.975154   76391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 18:31:53.992873   76391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 18:31:54.015018   76391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 18:31:54.035446   76391 ssh_runner.go:195] Run: grep 192.168.72.216	control-plane.minikube.internal$ /etc/hosts
	I0717 18:31:54.039709   76391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:31:54.052721   76391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:31:54.167697   76391 ssh_runner.go:195] Run: sudo systemctl start kubelet
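The kubelet/kubeadm steps above follow a simple pattern: stat the binary under /var/lib/minikube/binaries/<version>, copy it from the local cache only when the stat fails, write the unit files, then reload systemd and start the kubelet. A rough shell equivalent (the cache location below is illustrative, not the Jenkins path from the log):

    VER=v1.31.0-beta.0
    CACHE=$HOME/.minikube/cache/linux/amd64/$VER   # illustrative cache location
    DEST=/var/lib/minikube/binaries/$VER

    for bin in kubelet kubeadm; do
      # copy only when the target is missing, mirroring the existence check in the log
      if ! stat -c "%s %y" "$DEST/$bin" >/dev/null 2>&1; then
        sudo mkdir -p "$DEST"
        sudo cp "$CACHE/$bin" "$DEST/$bin" && sudo chmod +x "$DEST/$bin"
      fi
    done

    sudo systemctl daemon-reload
    sudo systemctl start kubelet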
	I0717 18:31:54.183483   76391 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175 for IP: 192.168.72.216
	I0717 18:31:54.183504   76391 certs.go:194] generating shared ca certs ...
	I0717 18:31:54.183519   76391 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.183653   76391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:31:54.183717   76391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:31:54.183731   76391 certs.go:256] generating profile certs ...
	I0717 18:31:54.183795   76391 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key
	I0717 18:31:54.183811   76391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.crt with IP's: []
	I0717 18:31:52.673263   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:31:52.673302   77994 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:31:52.673314   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetURL
	I0717 18:31:52.674791   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Using libvirt version 6000000
	I0717 18:31:52.677282   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.677737   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.677764   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.677878   77994 main.go:141] libmachine: Docker is up and running!
	I0717 18:31:52.677899   77994 main.go:141] libmachine: Reticulating splines...
	I0717 18:31:52.677908   77994 client.go:171] duration metric: took 20.870538459s to LocalClient.Create
	I0717 18:31:52.677943   77994 start.go:167] duration metric: took 20.870616s to libmachine.API.Create "embed-certs-527415"
	I0717 18:31:52.677956   77994 start.go:293] postStartSetup for "embed-certs-527415" (driver="kvm2")
	I0717 18:31:52.677974   77994 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:31:52.677991   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:52.678242   77994 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:31:52.678266   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.680248   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.680563   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.680597   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.680714   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.680879   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.681101   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.681232   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:31:52.766289   77994 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:31:52.770069   77994 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:31:52.770086   77994 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:31:52.770146   77994 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:31:52.770223   77994 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:31:52.770321   77994 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:31:52.779112   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:31:52.801280   77994 start.go:296] duration metric: took 123.306555ms for postStartSetup
	I0717 18:31:52.801328   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:31:52.801941   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:31:52.804815   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.805160   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.805188   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.805412   77994 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json ...
	I0717 18:31:52.805589   77994 start.go:128] duration metric: took 21.019966577s to createHost
	I0717 18:31:52.805616   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.807940   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.808405   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.808432   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.808545   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.808721   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.808882   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.809047   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.809195   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:52.809362   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:52.809375   77994 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:31:52.921449   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241112.893868317
	
	I0717 18:31:52.921468   77994 fix.go:216] guest clock: 1721241112.893868317
	I0717 18:31:52.921474   77994 fix.go:229] Guest: 2024-07-17 18:31:52.893868317 +0000 UTC Remote: 2024-07-17 18:31:52.805601992 +0000 UTC m=+30.199766249 (delta=88.266325ms)
	I0717 18:31:52.921494   77994 fix.go:200] guest clock delta is within tolerance: 88.266325ms
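The fix.go lines above read the guest clock over SSH, compare it to the host clock, and only resync when the delta leaves a tolerance window. A hedged stand-alone sketch of that comparison (the 2-second threshold is an assumption, not minikube's constant):

    # read guest and host time as fractional seconds
    guest=$(ssh docker@192.168.61.90 'date +%s.%N')
    host=$(date +%s.%N)

    # absolute delta, compared against an assumed 2-second tolerance
    delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = g - h; if (d < 0) d = -d; print d }')
    if awk -v d="$delta" 'BEGIN { exit !(d > 2) }'; then
      echo "guest clock off by ${delta}s, resync needed"
    else
      echo "guest clock delta ${delta}s is within tolerance"
    fi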
	I0717 18:31:52.921499   77994 start.go:83] releasing machines lock for "embed-certs-527415", held for 21.136037487s
	I0717 18:31:52.921517   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:52.921781   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:31:52.925132   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.925493   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.925519   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.925686   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:52.926244   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:52.926419   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:52.926533   77994 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:31:52.926579   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.926656   77994 ssh_runner.go:195] Run: cat /version.json
	I0717 18:31:52.926681   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.929807   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.929970   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.930168   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.930193   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.930365   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.930444   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.930471   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.930528   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.930685   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.930709   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.930840   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.930843   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:31:52.931018   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.931154   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:31:53.018875   77994 ssh_runner.go:195] Run: systemctl --version
	I0717 18:31:53.073618   77994 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:31:53.233683   77994 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:31:53.239402   77994 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:31:53.239458   77994 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:31:53.254745   77994 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:31:53.254768   77994 start.go:495] detecting cgroup driver to use...
	I0717 18:31:53.254852   77994 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:31:53.272129   77994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:31:53.284751   77994 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:31:53.284817   77994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:31:53.297287   77994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:31:53.310096   77994 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:31:53.418973   77994 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:31:53.569347   77994 docker.go:233] disabling docker service ...
	I0717 18:31:53.569424   77994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:31:53.584075   77994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:31:53.597553   77994 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:31:53.731390   77994 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:31:53.876960   77994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
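With CRI-O as the container runtime, the sequence above stops, disables, and masks cri-docker and docker so neither can claim the CRI socket. Condensed into one illustrative script:

    # stop and mask the Docker-based runtimes so CRI-O is the only CRI endpoint
    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" 2>/dev/null || true
    done
    sudo systemctl disable cri-docker.socket docker.socket 2>/dev/null || true
    sudo systemctl mask cri-docker.service docker.service
    sudo systemctl is-active --quiet docker && echo "docker still active" || echo "docker disabled"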
	I0717 18:31:53.895684   77994 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:31:53.921498   77994 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:31:53.921594   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:53.936665   77994 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:31:53.936739   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:53.949134   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:53.963753   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:53.975742   77994 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:31:53.987864   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:53.999149   77994 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:54.015311   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:54.026099   77994 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:31:54.038188   77994 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:31:54.038239   77994 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:31:54.051132   77994 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
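The sed edits above all target /etc/crio/crio.conf.d/02-crio.conf; taken together they aim at a drop-in roughly like the one below, plus the br_netfilter/ip_forward kernel settings checked right afterwards. The file layout is an approximation of the end state, not a copy of minikube's template:

    # approximate end state of the CRI-O drop-in after the sed edits above
    sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF

    # kernel prerequisites, then restart the runtime
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart crio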
	I0717 18:31:54.060875   77994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:31:54.178755   77994 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:31:54.580916   77994 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:31:54.581013   77994 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:31:54.585301   77994 start.go:563] Will wait 60s for crictl version
	I0717 18:31:54.585380   77994 ssh_runner.go:195] Run: which crictl
	I0717 18:31:54.588602   77994 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:31:54.625278   77994 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:31:54.625383   77994 ssh_runner.go:195] Run: crio --version
	I0717 18:31:54.660653   77994 ssh_runner.go:195] Run: crio --version
	I0717 18:31:54.696465   77994 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:31:54.268690   76391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.crt ...
	I0717 18:31:54.268717   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.crt: {Name:mkfc9a3fc73901f167d875c68badb009bba3473b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.268871   76391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key ...
	I0717 18:31:54.268881   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key: {Name:mka80e83b4f4aa4e9c199cede9b7f4aabb9280fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.268980   76391 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672
	I0717 18:31:54.268996   76391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt.78182672 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.216]
	I0717 18:31:54.434876   76391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt.78182672 ...
	I0717 18:31:54.434912   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt.78182672: {Name:mkc2c17201e99e2c605fdbca03d523d337a6eca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.435102   76391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672 ...
	I0717 18:31:54.435121   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672: {Name:mka7c3ef9777ecc269f3e41d6f06196449dd9e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.435229   76391 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt.78182672 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt
	I0717 18:31:54.435328   76391 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key
	I0717 18:31:54.435385   76391 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key
	I0717 18:31:54.435401   76391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt with IP's: []
	I0717 18:31:54.616605   76391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt ...
	I0717 18:31:54.616631   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt: {Name:mkaf0bc2dc76758834e2d1fce1784f41f5568c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.616791   76391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key ...
	I0717 18:31:54.616806   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key: {Name:mkee57f65eb7326dd47875723dc35812e3877809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.616991   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:31:54.617023   76391 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:31:54.617030   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:31:54.617051   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:31:54.617073   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:31:54.617101   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:31:54.617144   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:31:54.617791   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:31:54.648238   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:31:54.676253   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:31:54.702785   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:31:54.725238   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:31:54.748069   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:31:54.777237   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:31:54.800606   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:31:54.824913   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:31:54.847780   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:31:54.873257   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:31:54.907359   76391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:31:54.932656   76391 ssh_runner.go:195] Run: openssl version
	I0717 18:31:54.940667   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:31:54.955926   76391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:31:54.960974   76391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:31:54.961033   76391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:31:54.968406   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:31:54.982484   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:31:54.996890   76391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:31:55.004745   76391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:31:55.004813   76391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:31:55.012014   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:31:55.025057   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:31:55.038976   76391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:55.045874   76391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:55.045938   76391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:55.053668   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
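The openssl/ln pairs above install each CA certificate under /usr/share/ca-certificates and then create the subject-hash symlink (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's certificate lookup expects in /etc/ssl/certs. The generic pattern, written out for one hypothetical certificate:

    cert=/usr/share/ca-certificates/example-ca.pem   # hypothetical certificate
    hash=$(openssl x509 -hash -noout -in "$cert")    # e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # OpenSSL resolves CAs by <hash>.0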
	I0717 18:31:55.068421   76391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:31:55.072888   76391 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:31:55.072960   76391 kubeadm.go:392] StartCluster: {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:31:55.073055   76391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:31:55.073111   76391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:31:55.123582   76391 cri.go:89] found id: ""
	I0717 18:31:55.123695   76391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:31:55.138646   76391 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:31:55.151104   76391 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:31:55.162351   76391 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:31:55.162375   76391 kubeadm.go:157] found existing configuration files:
	
	I0717 18:31:55.162428   76391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:31:55.173765   76391 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:31:55.173827   76391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:31:55.189405   76391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:31:55.204438   76391 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:31:55.204513   76391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:31:55.216112   76391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:31:55.229982   76391 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:31:55.230033   76391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:31:55.243597   76391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:31:55.256553   76391 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:31:55.256625   76391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
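The four grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm init can regenerate it. The same check-and-remove loop, written explicitly:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # drop kubeconfigs that do not reference the expected API endpoint
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done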
	I0717 18:31:55.269573   76391 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:31:55.331158   76391 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 18:31:55.331556   76391 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:31:55.445321   76391 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:31:55.445462   76391 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:31:55.445606   76391 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 18:31:55.468599   76391 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:31:52.454284   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:54.954746   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:54.697918   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:31:54.700782   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:54.701202   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:54.701231   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:54.701409   77994 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 18:31:54.705863   77994 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:31:54.718108   77994 kubeadm.go:883] updating cluster {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:31:54.718282   77994 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:31:54.718362   77994 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:31:54.751153   77994 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:31:54.751227   77994 ssh_runner.go:195] Run: which lz4
	I0717 18:31:54.756244   77994 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:31:54.761463   77994 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:31:54.761488   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:31:56.014749   77994 crio.go:462] duration metric: took 1.258525232s to copy over tarball
	I0717 18:31:56.014875   77994 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
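The preload path copies an lz4-compressed image tarball (~395 MB here) to the guest and unpacks it into /var with extended attributes preserved, so file capabilities on the container images survive the copy. Producing and unpacking such a tarball by hand would look roughly like this (the packed directory is an assumption about the preload layout):

    # pack: capture container storage with xattrs, compressed with lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 \
      -C /var -cf preloaded.tar.lz4 lib/containers   # assumed payload directory

    # unpack on the target, mirroring the command in the log
    sudo tar --xattrs --xattrs-include security.capability -I lz4 \
      -C /var -xf preloaded.tar.lz4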
	I0717 18:31:55.470772   76391 out.go:204]   - Generating certificates and keys ...
	I0717 18:31:55.470883   76391 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:31:55.470985   76391 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:31:55.590001   76391 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:31:55.820801   76391 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:31:55.938963   76391 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:31:56.112630   76391 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 18:31:56.239675   76391 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 18:31:56.239814   76391 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-066175] and IPs [192.168.72.216 127.0.0.1 ::1]
	I0717 18:31:56.375120   76391 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 18:31:56.375506   76391 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-066175] and IPs [192.168.72.216 127.0.0.1 ::1]
	I0717 18:31:56.600019   76391 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:31:56.718280   76391 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:31:56.913309   76391 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 18:31:56.913402   76391 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:31:57.020178   76391 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:31:57.131272   76391 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:31:57.736863   76391 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:31:57.958126   76391 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:31:58.047292   76391 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:31:58.048051   76391 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:31:58.051183   76391 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:31:58.053328   76391 out.go:204]   - Booting up control plane ...
	I0717 18:31:58.053461   76391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:31:58.053565   76391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:31:58.053672   76391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:31:58.075519   76391 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:31:58.084553   76391 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:31:58.084634   76391 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:31:58.235800   76391 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:31:58.235921   76391 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:31:58.741075   76391 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.409445ms
	I0717 18:31:58.741227   76391 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:31:58.120843   77994 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.105923139s)
	I0717 18:31:58.120866   77994 crio.go:469] duration metric: took 2.106083712s to extract the tarball
	I0717 18:31:58.120873   77994 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:31:58.156367   77994 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:31:58.200921   77994 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:31:58.200955   77994 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:31:58.200965   77994 kubeadm.go:934] updating node { 192.168.61.90 8443 v1.30.2 crio true true} ...
	I0717 18:31:58.201090   77994 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-527415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:31:58.201163   77994 ssh_runner.go:195] Run: crio config
	I0717 18:31:58.252221   77994 cni.go:84] Creating CNI manager for ""
	I0717 18:31:58.252243   77994 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:31:58.252258   77994 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:31:58.252277   77994 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.90 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-527415 NodeName:embed-certs-527415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:31:58.252415   77994 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-527415"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:31:58.252475   77994 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:31:58.264998   77994 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:31:58.265066   77994 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:31:58.275284   77994 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0717 18:31:58.292501   77994 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:31:58.308586   77994 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0717 18:31:58.324035   77994 ssh_runner.go:195] Run: grep 192.168.61.90	control-plane.minikube.internal$ /etc/hosts
	I0717 18:31:58.327675   77994 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:31:58.340285   77994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:31:58.455213   77994 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:31:58.471042   77994 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415 for IP: 192.168.61.90
	I0717 18:31:58.471067   77994 certs.go:194] generating shared ca certs ...
	I0717 18:31:58.471097   77994 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.471320   77994 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:31:58.471399   77994 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:31:58.471415   77994 certs.go:256] generating profile certs ...
	I0717 18:31:58.471508   77994 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key
	I0717 18:31:58.471529   77994 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.crt with IP's: []
	I0717 18:31:58.693854   77994 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.crt ...
	I0717 18:31:58.693888   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.crt: {Name:mka8c970e93bdd8111ff40dffa7f77a2c03e5f9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.694083   77994 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key ...
	I0717 18:31:58.694097   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key: {Name:mk4459e338073cbe85f92b5e828eb8dad95c724a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.694196   77994 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9
	I0717 18:31:58.694211   77994 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt.f26848e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.90]
	I0717 18:31:58.773256   77994 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt.f26848e9 ...
	I0717 18:31:58.773282   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt.f26848e9: {Name:mkdd3636f13c8ab881f83fc1d3b87dc73c54b436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.773453   77994 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9 ...
	I0717 18:31:58.773469   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9: {Name:mk452c939818aa8ab2959db3b8f6f150d79a61c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.773562   77994 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt.f26848e9 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt
	I0717 18:31:58.773652   77994 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key
	I0717 18:31:58.773708   77994 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key
	I0717 18:31:58.773722   77994 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt with IP's: []
	I0717 18:31:58.991104   77994 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt ...
	I0717 18:31:58.991132   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt: {Name:mk0cd91bc7679c284d1182d4f6ff5007e1d42583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.991292   77994 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key ...
	I0717 18:31:58.991304   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key: {Name:mk71c78a469bc4e8a4c94b29ca757ac1bc46349d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.991457   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:31:58.991495   77994 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:31:58.991504   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:31:58.991526   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:31:58.991546   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:31:58.991566   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:31:58.991606   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:31:58.992203   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:31:59.020109   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:31:59.045102   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:31:59.066401   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:31:59.088628   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 18:31:59.111918   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:31:59.133766   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:31:59.157153   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:31:59.186329   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:31:59.208929   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:31:59.242074   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:31:59.277509   77994 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:31:59.298944   77994 ssh_runner.go:195] Run: openssl version
	I0717 18:31:59.305473   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:31:59.318247   77994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:59.325663   77994 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:59.325758   77994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:59.333143   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:31:59.347546   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:31:59.361626   77994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:31:59.366207   77994 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:31:59.366272   77994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:31:59.371771   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:31:59.382330   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:31:59.393255   77994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:31:59.400958   77994 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:31:59.401022   77994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:31:59.408425   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:31:59.422321   77994 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:31:59.426531   77994 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:31:59.426588   77994 kubeadm.go:392] StartCluster: {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:31:59.426707   77994 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:31:59.426777   77994 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:31:59.464919   77994 cri.go:89] found id: ""
	I0717 18:31:59.465008   77994 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:31:59.474303   77994 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:31:59.483286   77994 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:31:59.492360   77994 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:31:59.492382   77994 kubeadm.go:157] found existing configuration files:
	
	I0717 18:31:59.492433   77994 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:31:59.503928   77994 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:31:59.504000   77994 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:31:59.513822   77994 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:31:59.523256   77994 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:31:59.523322   77994 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:31:59.531799   77994 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:31:59.548122   77994 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:31:59.548180   77994 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:31:59.563272   77994 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:31:59.572332   77994 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:31:59.572394   77994 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:31:59.583016   77994 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:31:59.701044   77994 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:31:59.701101   77994 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:31:59.834726   77994 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:31:59.834877   77994 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:31:59.835005   77994 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:32:00.030478   77994 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:31:57.453157   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:59.454636   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:00.286575   77994 out.go:204]   - Generating certificates and keys ...
	I0717 18:32:00.286711   77994 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:32:00.286805   77994 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:32:00.286902   77994 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:32:00.397498   77994 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:32:00.830524   77994 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:32:01.000442   77994 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 18:32:01.064799   77994 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 18:32:01.065081   77994 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-527415 localhost] and IPs [192.168.61.90 127.0.0.1 ::1]
	I0717 18:32:01.322578   77994 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 18:32:01.322847   77994 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-527415 localhost] and IPs [192.168.61.90 127.0.0.1 ::1]
	I0717 18:32:01.554100   77994 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:32:01.689208   77994 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:32:02.015293   77994 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 18:32:02.015525   77994 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:32:02.124199   77994 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:32:02.176757   77994 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:32:02.573586   77994 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:32:02.897023   77994 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:32:03.051541   77994 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:32:03.052453   77994 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:32:03.055262   77994 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:32:05.743834   76391 kubeadm.go:310] [api-check] The API server is healthy after 7.002303996s
	I0717 18:32:05.760530   76391 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:32:05.778549   76391 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:32:05.817434   76391 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:32:05.817724   76391 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-066175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:32:05.831095   76391 kubeadm.go:310] [bootstrap-token] Using token: 2lj338.n7y99vmpdx4rwfva
	I0717 18:32:01.952471   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:02.946617   64770 pod_ready.go:81] duration metric: took 4m0.000703328s for pod "kube-proxy-8jf5p" in "kube-system" namespace to be "Ready" ...
	E0717 18:32:02.946667   64770 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "kube-proxy-8jf5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:32:02.946688   64770 pod_ready.go:38] duration metric: took 4m13.537210596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:02.946713   64770 kubeadm.go:597] duration metric: took 4m41.544315272s to restartPrimaryControlPlane
	W0717 18:32:02.946772   64770 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:32:02.946807   64770 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:32:05.832316   76391 out.go:204]   - Configuring RBAC rules ...
	I0717 18:32:05.832468   76391 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:32:05.839739   76391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:32:05.848276   76391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:32:05.852243   76391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:32:05.859383   76391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:32:05.863387   76391 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:32:06.157376   76391 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:32:07.408059   76391 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:32:07.461385   76391 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:32:07.462841   76391 kubeadm.go:310] 
	I0717 18:32:07.462935   76391 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:32:07.462947   76391 kubeadm.go:310] 
	I0717 18:32:07.463042   76391 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:32:07.463055   76391 kubeadm.go:310] 
	I0717 18:32:07.463082   76391 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:32:07.463150   76391 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:32:07.463218   76391 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:32:07.463233   76391 kubeadm.go:310] 
	I0717 18:32:07.463301   76391 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:32:07.463309   76391 kubeadm.go:310] 
	I0717 18:32:07.463370   76391 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:32:07.463380   76391 kubeadm.go:310] 
	I0717 18:32:07.463454   76391 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:32:07.463554   76391 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:32:07.463650   76391 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:32:07.463659   76391 kubeadm.go:310] 
	I0717 18:32:07.463761   76391 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:32:07.463857   76391 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:32:07.463867   76391 kubeadm.go:310] 
	I0717 18:32:07.463974   76391 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2lj338.n7y99vmpdx4rwfva \
	I0717 18:32:07.464106   76391 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:32:07.464136   76391 kubeadm.go:310] 	--control-plane 
	I0717 18:32:07.464145   76391 kubeadm.go:310] 
	I0717 18:32:07.464245   76391 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:32:07.464257   76391 kubeadm.go:310] 
	I0717 18:32:07.464372   76391 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2lj338.n7y99vmpdx4rwfva \
	I0717 18:32:07.464503   76391 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:32:07.465356   76391 kubeadm.go:310] W0717 18:31:55.323806    1276 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:32:07.465692   76391 kubeadm.go:310] W0717 18:31:55.325194    1276 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:32:07.465822   76391 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:32:07.465849   76391 cni.go:84] Creating CNI manager for ""
	I0717 18:32:07.465859   76391 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:32:07.467568   76391 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:32:03.057151   77994 out.go:204]   - Booting up control plane ...
	I0717 18:32:03.057270   77994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:32:03.057371   77994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:32:03.057429   77994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:32:03.076263   77994 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:32:03.077291   77994 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:32:03.077384   77994 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:32:03.214187   77994 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:32:03.214308   77994 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:32:04.215325   77994 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002173836s
	I0717 18:32:04.215473   77994 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:32:07.468992   76391 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:32:07.483826   76391 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:32:07.502648   76391 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:32:07.502804   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:07.502893   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-066175 minikube.k8s.io/updated_at=2024_07_17T18_32_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=no-preload-066175 minikube.k8s.io/primary=true
	I0717 18:32:07.559426   76391 ops.go:34] apiserver oom_adj: -16
	I0717 18:32:07.721988   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:08.222446   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:08.722013   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:09.222076   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:09.214689   77994 kubeadm.go:310] [api-check] The API server is healthy after 5.002534955s
	I0717 18:32:09.230696   77994 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:32:09.252928   77994 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:32:09.284112   77994 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:32:09.284388   77994 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-527415 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:32:09.297006   77994 kubeadm.go:310] [bootstrap-token] Using token: a3ak5v.cv98bs6avaxmk4mp
	I0717 18:32:09.298461   77994 out.go:204]   - Configuring RBAC rules ...
	I0717 18:32:09.298606   77994 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:32:09.308006   77994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:32:09.315914   77994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:32:09.319324   77994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:32:09.322805   77994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:32:09.326217   77994 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:32:09.622993   77994 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:32:10.055436   77994 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:32:10.622037   77994 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:32:10.622078   77994 kubeadm.go:310] 
	I0717 18:32:10.622176   77994 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:32:10.622206   77994 kubeadm.go:310] 
	I0717 18:32:10.622314   77994 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:32:10.622342   77994 kubeadm.go:310] 
	I0717 18:32:10.622386   77994 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:32:10.622460   77994 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:32:10.622557   77994 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:32:10.622571   77994 kubeadm.go:310] 
	I0717 18:32:10.622671   77994 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:32:10.622682   77994 kubeadm.go:310] 
	I0717 18:32:10.622757   77994 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:32:10.622767   77994 kubeadm.go:310] 
	I0717 18:32:10.622837   77994 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:32:10.622946   77994 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:32:10.623047   77994 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:32:10.623057   77994 kubeadm.go:310] 
	I0717 18:32:10.623149   77994 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:32:10.623249   77994 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:32:10.623262   77994 kubeadm.go:310] 
	I0717 18:32:10.623377   77994 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a3ak5v.cv98bs6avaxmk4mp \
	I0717 18:32:10.623513   77994 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:32:10.623549   77994 kubeadm.go:310] 	--control-plane 
	I0717 18:32:10.623558   77994 kubeadm.go:310] 
	I0717 18:32:10.623668   77994 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:32:10.623679   77994 kubeadm.go:310] 
	I0717 18:32:10.623784   77994 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a3ak5v.cv98bs6avaxmk4mp \
	I0717 18:32:10.623913   77994 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:32:10.624051   77994 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:32:10.624074   77994 cni.go:84] Creating CNI manager for ""
	I0717 18:32:10.624087   77994 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:32:10.625793   77994 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:32:09.722118   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:10.222422   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:10.722519   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:11.222021   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:11.722103   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:12.222243   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:12.299594   76391 kubeadm.go:1113] duration metric: took 4.796842133s to wait for elevateKubeSystemPrivileges
	I0717 18:32:12.299625   76391 kubeadm.go:394] duration metric: took 17.226686695s to StartCluster
	I0717 18:32:12.299643   76391 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:12.299710   76391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:32:12.300525   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:12.300734   76391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 18:32:12.300742   76391 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:32:12.300799   76391 addons.go:69] Setting storage-provisioner=true in profile "no-preload-066175"
	I0717 18:32:12.300817   76391 addons.go:69] Setting default-storageclass=true in profile "no-preload-066175"
	I0717 18:32:12.300836   76391 addons.go:234] Setting addon storage-provisioner=true in "no-preload-066175"
	I0717 18:32:12.300845   76391 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-066175"
	I0717 18:32:12.300727   76391 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:32:12.300864   76391 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:32:12.300930   76391 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:32:12.301301   76391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:12.301308   76391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:12.301337   76391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:12.301349   76391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:12.303700   76391 out.go:177] * Verifying Kubernetes components...
	I0717 18:32:12.305055   76391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:32:12.316928   76391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0717 18:32:12.316965   76391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41587
	I0717 18:32:12.317342   76391 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:12.317395   76391 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:12.317841   76391 main.go:141] libmachine: Using API Version  1
	I0717 18:32:12.317861   76391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:12.318009   76391 main.go:141] libmachine: Using API Version  1
	I0717 18:32:12.318035   76391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:12.318198   76391 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:12.318399   76391 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:12.318440   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:32:12.318952   76391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:12.318983   76391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:12.322050   76391 addons.go:234] Setting addon default-storageclass=true in "no-preload-066175"
	I0717 18:32:12.322094   76391 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:32:12.322489   76391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:12.322520   76391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:12.336191   76391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46215
	I0717 18:32:12.336721   76391 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:12.337242   76391 main.go:141] libmachine: Using API Version  1
	I0717 18:32:12.337266   76391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:12.337638   76391 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:12.337829   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:32:12.338963   76391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0717 18:32:12.339440   76391 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:12.339824   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:32:12.340020   76391 main.go:141] libmachine: Using API Version  1
	I0717 18:32:12.340045   76391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:12.340375   76391 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:12.340926   76391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:12.340994   76391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:12.341956   76391 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:32:10.627191   77994 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:32:10.640013   77994 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:32:10.658487   77994 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:32:10.658556   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:10.658562   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-527415 minikube.k8s.io/updated_at=2024_07_17T18_32_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=embed-certs-527415 minikube.k8s.io/primary=true
	I0717 18:32:10.866189   77994 ops.go:34] apiserver oom_adj: -16
	I0717 18:32:10.866330   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:11.366429   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:11.867195   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:12.367254   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:12.343751   76391 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:32:12.343771   76391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:32:12.343790   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:32:12.347332   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:32:12.347869   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:32:12.347895   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:32:12.348072   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:32:12.348259   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:32:12.348469   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:32:12.348622   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:32:12.357745   76391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0717 18:32:12.358161   76391 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:12.358642   76391 main.go:141] libmachine: Using API Version  1
	I0717 18:32:12.358655   76391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:12.359015   76391 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:12.359205   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:32:12.360820   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:32:12.361030   76391 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:32:12.361043   76391 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:32:12.361062   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:32:12.363864   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:32:12.364271   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:32:12.364291   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:32:12.364480   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:32:12.364644   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:32:12.364777   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:32:12.364902   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:32:12.447183   76391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 18:32:12.489400   76391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:32:12.603372   76391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:32:12.617104   76391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:32:12.789894   76391 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
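(Editorial aside: the bash pipeline logged at 18:32:12.447 splices a "hosts { 192.168.72.1 host.minikube.internal; fallthrough }" stanza and a "log" directive into the CoreDNS Corefile ahead of the "forward . /etc/resolv.conf" line, then replaces the ConfigMap; the "host record injected" line above is the confirmation. Below is a minimal client-go sketch for inspecting that result after the fact. It is illustrative only, not minikube's code; the kubeconfig path is a placeholder, and the only values taken from this run are the ConfigMap name/namespace and the 192.168.72.1 host record.)

// Hedged sketch: fetch the CoreDNS ConfigMap and print the Corefile so the
// injected "hosts" block can be inspected. Not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; substitute the kubeconfig for the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// After the replace step the Corefile should contain a block roughly like:
	//   hosts {
	//      192.168.72.1 host.minikube.internal
	//      fallthrough
	//   }
	fmt.Println(cm.Data["Corefile"])
}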
	I0717 18:32:12.790793   76391 node_ready.go:35] waiting up to 6m0s for node "no-preload-066175" to be "Ready" ...
	I0717 18:32:12.804020   76391 node_ready.go:49] node "no-preload-066175" has status "Ready":"True"
	I0717 18:32:12.804042   76391 node_ready.go:38] duration metric: took 13.208161ms for node "no-preload-066175" to be "Ready" ...
	I0717 18:32:12.804053   76391 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:12.816264   76391 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-qb7wm" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:12.969841   76391 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:12.969868   76391 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:32:12.970124   76391 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:12.970143   76391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:12.970154   76391 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:12.970165   76391 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:32:12.970422   76391 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:12.970439   76391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:12.980050   76391 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:12.980070   76391 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:32:12.980320   76391 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:12.980337   76391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:13.138708   76391 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:13.138735   76391 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:32:13.139056   76391 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:32:13.139086   76391 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:13.139100   76391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:13.139123   76391 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:13.139135   76391 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:32:13.139369   76391 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:13.139386   76391 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:32:13.139388   76391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:13.141708   76391 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0717 18:32:13.143046   76391 addons.go:510] duration metric: took 842.300638ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0717 18:32:13.294235   76391 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-066175" context rescaled to 1 replicas
	I0717 18:32:13.319940   76391 pod_ready.go:97] error getting pod "coredns-5cfdc65f69-qb7wm" in "kube-system" namespace (skipping!): pods "coredns-5cfdc65f69-qb7wm" not found
	I0717 18:32:13.319964   76391 pod_ready.go:81] duration metric: took 503.676164ms for pod "coredns-5cfdc65f69-qb7wm" in "kube-system" namespace to be "Ready" ...
	E0717 18:32:13.319972   76391 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5cfdc65f69-qb7wm" in "kube-system" namespace (skipping!): pods "coredns-5cfdc65f69-qb7wm" not found
	I0717 18:32:13.319979   76391 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:12.867153   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:13.366677   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:13.867151   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:14.367386   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:14.866672   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:15.366599   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:15.866972   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:16.366534   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:16.867423   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:17.366409   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:15.326751   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:17.327034   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:17.866993   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:18.366558   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:18.867336   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:19.366437   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:19.867145   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:20.366941   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:20.866366   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:21.366979   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:21.866895   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:22.366419   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:22.866835   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:23.049357   77994 kubeadm.go:1113] duration metric: took 12.390858013s to wait for elevateKubeSystemPrivileges
	I0717 18:32:23.049391   77994 kubeadm.go:394] duration metric: took 23.6228077s to StartCluster
	I0717 18:32:23.049412   77994 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:23.049500   77994 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:32:23.051540   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:23.051799   77994 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 18:32:23.051806   77994 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:32:23.051902   77994 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:32:23.051986   77994 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-527415"
	I0717 18:32:23.052005   77994 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:32:23.052019   77994 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-527415"
	I0717 18:32:23.052018   77994 addons.go:69] Setting default-storageclass=true in profile "embed-certs-527415"
	I0717 18:32:23.052047   77994 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-527415"
	I0717 18:32:23.052069   77994 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:32:23.052493   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:23.052518   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:23.052576   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:23.052623   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:23.053376   77994 out.go:177] * Verifying Kubernetes components...
	I0717 18:32:23.054586   77994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:32:23.067519   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45117
	I0717 18:32:23.067519   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I0717 18:32:23.068056   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:23.068101   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:23.068603   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:32:23.068622   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:23.068784   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:32:23.068815   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:23.068929   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:23.069117   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:23.069427   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:32:23.069550   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:23.069592   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:23.072592   77994 addons.go:234] Setting addon default-storageclass=true in "embed-certs-527415"
	I0717 18:32:23.072643   77994 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:32:23.072922   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:23.072980   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:23.084859   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44261
	I0717 18:32:23.085308   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:23.085836   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:32:23.085860   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:23.086210   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:23.086424   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:32:23.087266   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37363
	I0717 18:32:23.087613   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:23.088096   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:32:23.088118   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:23.088433   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:23.088539   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:32:23.088953   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:23.088986   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:23.091021   77994 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:32:19.327651   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:21.826194   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:23.827863   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:23.092654   77994 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:32:23.092675   77994 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:32:23.092692   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:32:23.095593   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:32:23.096061   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:32:23.096110   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:32:23.096300   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:32:23.096499   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:32:23.096657   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:32:23.096820   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:32:23.106161   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0717 18:32:23.106530   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:23.107007   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:32:23.107023   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:23.107300   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:23.107445   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:32:23.108998   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:32:23.109166   77994 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:32:23.109175   77994 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:32:23.109187   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:32:23.111274   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:32:23.111551   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:32:23.111571   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:32:23.111728   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:32:23.111877   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:32:23.112017   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:32:23.112106   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:32:23.295935   77994 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:32:23.296022   77994 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 18:32:23.388927   77994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:32:23.431711   77994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:32:23.850201   77994 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0717 18:32:23.851409   77994 node_ready.go:35] waiting up to 6m0s for node "embed-certs-527415" to be "Ready" ...
	I0717 18:32:23.863182   77994 node_ready.go:49] node "embed-certs-527415" has status "Ready":"True"
	I0717 18:32:23.863208   77994 node_ready.go:38] duration metric: took 11.769585ms for node "embed-certs-527415" to be "Ready" ...
	I0717 18:32:23.863219   77994 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:23.878221   77994 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:23.922366   77994 pod_ready.go:92] pod "etcd-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:23.922397   77994 pod_ready.go:81] duration metric: took 44.145148ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:23.922412   77994 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:23.972286   77994 pod_ready.go:92] pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:23.972317   77994 pod_ready.go:81] duration metric: took 49.896346ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:23.972332   77994 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:24.004551   77994 pod_ready.go:92] pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:24.004584   77994 pod_ready.go:81] duration metric: took 32.243425ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:24.004600   77994 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:24.259424   77994 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:24.259454   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:32:24.259453   77994 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:24.259472   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:32:24.259854   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:32:24.259862   77994 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:24.259875   77994 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:24.259883   77994 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:24.259892   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:32:24.259892   77994 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:24.259955   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:32:24.259972   77994 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:24.260042   77994 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:24.260077   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:32:24.260119   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:32:24.260145   77994 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:24.260163   77994 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:24.260503   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:32:24.260567   77994 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:24.260690   77994 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:24.273688   77994 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:24.273713   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:32:24.273996   77994 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:24.274011   77994 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:24.276422   77994 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 18:32:24.506526   64770 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (21.559697462s)
	I0717 18:32:24.506598   64770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:32:24.522465   64770 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:32:24.532133   64770 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:32:24.544821   64770 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:32:24.544845   64770 kubeadm.go:157] found existing configuration files:
	
	I0717 18:32:24.544897   64770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:32:24.554424   64770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:32:24.554488   64770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:32:24.566237   64770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:32:24.575272   64770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:32:24.575334   64770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:32:24.584999   64770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:32:24.593607   64770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:32:24.593669   64770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:32:24.602671   64770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:32:24.614348   64770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:32:24.614410   64770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
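(Editorial aside: the kubeadm.go:163 lines above implement a stale-config check: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so the following "kubeadm init" regenerates it. The sketch below restates that logic in plain Go under stated assumptions; it is not minikube's kubeadm.go, and the endpoint string is the one visible in the grep commands above.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// Hedged sketch of the stale-config cleanup shown in the log: keep a config
// file only if it points at the expected control-plane endpoint, otherwise
// remove it (a missing file counts as stale too). Illustrative only.
func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Absent or pointing elsewhere: treat as stale, matching the
			// "may not be in ... - will remove" lines in the log.
			_ = os.Remove(f)
			fmt.Println("removed (or absent):", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}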
	I0717 18:32:24.626954   64770 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:32:24.684529   64770 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:32:24.684607   64770 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:32:24.829772   64770 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:32:24.829896   64770 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:32:24.830052   64770 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:32:25.042058   64770 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:32:25.043848   64770 out.go:204]   - Generating certificates and keys ...
	I0717 18:32:25.043957   64770 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:32:25.044053   64770 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:32:25.044179   64770 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:32:25.044269   64770 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:32:25.044369   64770 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:32:25.044458   64770 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:32:25.044530   64770 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:32:25.044640   64770 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:32:25.044744   64770 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:32:25.044856   64770 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:32:25.044915   64770 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:32:25.045017   64770 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:32:25.133990   64770 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:32:25.333240   64770 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:32:25.496733   64770 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:32:25.669974   64770 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:32:25.748419   64770 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:32:25.748921   64770 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:32:25.751254   64770 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:32:25.752949   64770 out.go:204]   - Booting up control plane ...
	I0717 18:32:25.753065   64770 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:32:25.753188   64770 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:32:25.753300   64770 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:32:25.773041   64770 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:32:25.774016   64770 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:32:25.774075   64770 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:32:24.277689   77994 addons.go:510] duration metric: took 1.225784419s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 18:32:24.353967   77994 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-527415" context rescaled to 1 replicas
	I0717 18:32:25.510657   77994 pod_ready.go:92] pod "kube-proxy-jltfs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:25.510700   77994 pod_ready.go:81] duration metric: took 1.506082868s for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:25.510712   77994 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:25.515157   77994 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:25.515190   77994 pod_ready.go:81] duration metric: took 4.469793ms for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:25.515199   77994 pod_ready.go:38] duration metric: took 1.651968378s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:25.515216   77994 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:32:25.515265   77994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:32:25.530170   77994 api_server.go:72] duration metric: took 2.478333128s to wait for apiserver process to appear ...
	I0717 18:32:25.530195   77994 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:32:25.530213   77994 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:32:25.535348   77994 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:32:25.536289   77994 api_server.go:141] control plane version: v1.30.2
	I0717 18:32:25.536309   77994 api_server.go:131] duration metric: took 6.106885ms to wait for apiserver health ...
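(Editorial aside: the api_server.go:253/279 lines above amount to an HTTPS GET against the apiserver's /healthz endpoint, treated as healthy once it returns 200 with body "ok". A standalone sketch of that kind of probe is below; it assumes certificate verification is skipped instead of loading the cluster CA, uses the endpoint printed in the log, and is not minikube's actual health-check code.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip CA verification rather than
			// trusting the cluster CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.61.90:8443/healthz") // endpoint from the log above
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}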
	I0717 18:32:25.536318   77994 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:32:25.657797   77994 system_pods.go:59] 7 kube-system pods found
	I0717 18:32:25.657831   77994 system_pods.go:61] "coredns-7db6d8ff4d-2fnlb" [86d50e9b-fb88-4332-90c5-a969b0654635] Running
	I0717 18:32:25.657838   77994 system_pods.go:61] "etcd-embed-certs-527415" [9d8ac0a8-4639-48d8-8ac4-88b0bd1e2082] Running
	I0717 18:32:25.657844   77994 system_pods.go:61] "kube-apiserver-embed-certs-527415" [7f72c4f9-f1db-4ac6-83e1-2b94245107c9] Running
	I0717 18:32:25.657851   77994 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [96081a97-2a90-4fec-84cb-9a399a43aeb4] Running
	I0717 18:32:25.657857   77994 system_pods.go:61] "kube-proxy-jltfs" [27f6259e-80cc-4881-bb06-6a2ad529179c] Running
	I0717 18:32:25.657862   77994 system_pods.go:61] "kube-scheduler-embed-certs-527415" [bed7b515-7ab0-460c-a13f-037f29576f30] Running
	I0717 18:32:25.657867   77994 system_pods.go:61] "storage-provisioner" [ccb34b69-d28d-477e-8c7a-0acdc547bec7] Running
	I0717 18:32:25.657874   77994 system_pods.go:74] duration metric: took 121.550087ms to wait for pod list to return data ...
	I0717 18:32:25.657885   77994 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:32:25.854953   77994 default_sa.go:45] found service account: "default"
	I0717 18:32:25.854985   77994 default_sa.go:55] duration metric: took 197.091585ms for default service account to be created ...
	I0717 18:32:25.854994   77994 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:32:26.058082   77994 system_pods.go:86] 7 kube-system pods found
	I0717 18:32:26.058107   77994 system_pods.go:89] "coredns-7db6d8ff4d-2fnlb" [86d50e9b-fb88-4332-90c5-a969b0654635] Running
	I0717 18:32:26.058112   77994 system_pods.go:89] "etcd-embed-certs-527415" [9d8ac0a8-4639-48d8-8ac4-88b0bd1e2082] Running
	I0717 18:32:26.058116   77994 system_pods.go:89] "kube-apiserver-embed-certs-527415" [7f72c4f9-f1db-4ac6-83e1-2b94245107c9] Running
	I0717 18:32:26.058120   77994 system_pods.go:89] "kube-controller-manager-embed-certs-527415" [96081a97-2a90-4fec-84cb-9a399a43aeb4] Running
	I0717 18:32:26.058124   77994 system_pods.go:89] "kube-proxy-jltfs" [27f6259e-80cc-4881-bb06-6a2ad529179c] Running
	I0717 18:32:26.058128   77994 system_pods.go:89] "kube-scheduler-embed-certs-527415" [bed7b515-7ab0-460c-a13f-037f29576f30] Running
	I0717 18:32:26.058131   77994 system_pods.go:89] "storage-provisioner" [ccb34b69-d28d-477e-8c7a-0acdc547bec7] Running
	I0717 18:32:26.058137   77994 system_pods.go:126] duration metric: took 203.139243ms to wait for k8s-apps to be running ...
	I0717 18:32:26.058144   77994 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:32:26.058184   77994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:32:26.072008   77994 system_svc.go:56] duration metric: took 13.857466ms WaitForService to wait for kubelet
	I0717 18:32:26.072029   77994 kubeadm.go:582] duration metric: took 3.020194343s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:32:26.072053   77994 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:32:26.256016   77994 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:32:26.256045   77994 node_conditions.go:123] node cpu capacity is 2
	I0717 18:32:26.256059   77994 node_conditions.go:105] duration metric: took 183.999929ms to run NodePressure ...
	I0717 18:32:26.256070   77994 start.go:241] waiting for startup goroutines ...
	I0717 18:32:26.256076   77994 start.go:246] waiting for cluster config update ...
	I0717 18:32:26.256086   77994 start.go:255] writing updated cluster config ...
	I0717 18:32:26.256362   77994 ssh_runner.go:195] Run: rm -f paused
	I0717 18:32:26.309934   77994 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:32:26.311896   77994 out.go:177] * Done! kubectl is now configured to use "embed-certs-527415" cluster and "default" namespace by default
	I0717 18:32:26.326787   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:28.327057   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:25.906961   64770 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:32:25.907084   64770 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:32:26.908851   64770 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001768612s
	I0717 18:32:26.908965   64770 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:32:31.410170   64770 kubeadm.go:310] [api-check] The API server is healthy after 4.501210398s
	I0717 18:32:31.423141   64770 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:32:31.437827   64770 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:32:31.459779   64770 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:32:31.460045   64770 kubeadm.go:310] [mark-control-plane] Marking the node pause-371172 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:32:31.470275   64770 kubeadm.go:310] [bootstrap-token] Using token: 5jyj9a.o3rmgl5b7o1vg2ev
	I0717 18:32:31.471766   64770 out.go:204]   - Configuring RBAC rules ...
	I0717 18:32:31.471898   64770 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:32:31.478042   64770 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:32:31.491995   64770 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:32:31.499200   64770 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:32:31.502657   64770 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:32:31.505464   64770 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:32:31.820754   64770 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:32:32.246760   64770 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:32:32.821662   64770 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:32:32.822716   64770 kubeadm.go:310] 
	I0717 18:32:32.822787   64770 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:32:32.822799   64770 kubeadm.go:310] 
	I0717 18:32:32.822911   64770 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:32:32.822936   64770 kubeadm.go:310] 
	I0717 18:32:32.822972   64770 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:32:32.823052   64770 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:32:32.823123   64770 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:32:32.823134   64770 kubeadm.go:310] 
	I0717 18:32:32.823204   64770 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:32:32.823214   64770 kubeadm.go:310] 
	I0717 18:32:32.823288   64770 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:32:32.823302   64770 kubeadm.go:310] 
	I0717 18:32:32.823367   64770 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:32:32.823462   64770 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:32:32.823548   64770 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:32:32.823557   64770 kubeadm.go:310] 
	I0717 18:32:32.823681   64770 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:32:32.823795   64770 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:32:32.823811   64770 kubeadm.go:310] 
	I0717 18:32:32.823904   64770 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5jyj9a.o3rmgl5b7o1vg2ev \
	I0717 18:32:32.824022   64770 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:32:32.824052   64770 kubeadm.go:310] 	--control-plane 
	I0717 18:32:32.824058   64770 kubeadm.go:310] 
	I0717 18:32:32.824156   64770 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:32:32.824166   64770 kubeadm.go:310] 
	I0717 18:32:32.824259   64770 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5jyj9a.o3rmgl5b7o1vg2ev \
	I0717 18:32:32.824372   64770 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:32:32.825050   64770 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:32:32.825081   64770 cni.go:84] Creating CNI manager for ""
	I0717 18:32:32.825091   64770 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:32:32.826796   64770 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:32:30.826334   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:32.827864   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:32.828019   64770 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:32:32.838128   64770 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:32:32.855649   64770 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:32:32.855717   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:32.855756   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-371172 minikube.k8s.io/updated_at=2024_07_17T18_32_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=pause-371172 minikube.k8s.io/primary=true
	I0717 18:32:32.892253   64770 ops.go:34] apiserver oom_adj: -16
	I0717 18:32:32.955417   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:33.455643   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:33.955923   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:34.455692   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:34.955577   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:35.456396   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:35.326762   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:37.328053   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:35.956455   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:36.455679   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:36.956189   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:37.455691   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:37.955711   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:38.455564   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:38.955808   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:39.455805   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:39.955504   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:40.455927   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:39.827074   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:42.327739   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:40.955576   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:41.456147   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:41.955780   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:42.455962   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:42.955917   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:43.456077   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:43.955935   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:44.456127   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:44.956361   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:45.456107   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:45.956243   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:46.456443   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:46.956212   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:47.455607   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:47.554718   64770 kubeadm.go:1113] duration metric: took 14.699058976s to wait for elevateKubeSystemPrivileges
	I0717 18:32:47.554754   64770 kubeadm.go:394] duration metric: took 5m26.289545826s to StartCluster
	I0717 18:32:47.554774   64770 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:47.554859   64770 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:32:47.556276   64770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:47.556540   64770 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:32:47.556599   64770 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:32:47.556761   64770 config.go:182] Loaded profile config "pause-371172": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:32:47.558184   64770 out.go:177] * Verifying Kubernetes components...
	I0717 18:32:47.559039   64770 out.go:177] * Enabled addons: 
	I0717 18:32:44.826544   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:47.326337   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:49.327760   76391 pod_ready.go:92] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.327780   76391 pod_ready.go:81] duration metric: took 36.007794739s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.327788   76391 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.332810   76391 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.332837   76391 pod_ready.go:81] duration metric: took 5.041956ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.332850   76391 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.337104   76391 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.337124   76391 pod_ready.go:81] duration metric: took 4.266061ms for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.337133   76391 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.342354   76391 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.342372   76391 pod_ready.go:81] duration metric: took 5.231615ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.342382   76391 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.346851   76391 pod_ready.go:92] pod "kube-proxy-tn5xn" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.346867   76391 pod_ready.go:81] duration metric: took 4.471918ms for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.346876   76391 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.724382   76391 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.724415   76391 pod_ready.go:81] duration metric: took 377.530235ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.724427   76391 pod_ready.go:38] duration metric: took 36.920360552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:49.724443   76391 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:32:49.724502   76391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:32:49.739919   76391 api_server.go:72] duration metric: took 37.439039525s to wait for apiserver process to appear ...
	I0717 18:32:49.739941   76391 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:32:49.739957   76391 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:32:49.744304   76391 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:32:49.745279   76391 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:32:49.745298   76391 api_server.go:131] duration metric: took 5.350779ms to wait for apiserver health ...
	I0717 18:32:49.745305   76391 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:32:49.928037   76391 system_pods.go:59] 7 kube-system pods found
	I0717 18:32:49.928084   76391 system_pods.go:61] "coredns-5cfdc65f69-spj2w" [6849b651-9346-4d96-97a7-88eca7bbd50a] Running
	I0717 18:32:49.928091   76391 system_pods.go:61] "etcd-no-preload-066175" [be012488-220b-421d-bf16-a3623fafb8fa] Running
	I0717 18:32:49.928097   76391 system_pods.go:61] "kube-apiserver-no-preload-066175" [4292a786-61f3-405d-8784-ec8a58e1b124] Running
	I0717 18:32:49.928102   76391 system_pods.go:61] "kube-controller-manager-no-preload-066175" [937a48f4-7fca-4cee-bb50-51f1720960da] Running
	I0717 18:32:49.928106   76391 system_pods.go:61] "kube-proxy-tn5xn" [f0a910b3-98b6-470f-a5a2-e49369ecb733] Running
	I0717 18:32:49.928116   76391 system_pods.go:61] "kube-scheduler-no-preload-066175" [ffa2475c-7a5a-4988-89a2-4727e07356cb] Running
	I0717 18:32:49.928120   76391 system_pods.go:61] "storage-provisioner" [19914ecc-2fcc-4cb8-bd78-fb6891dcf85d] Running
	I0717 18:32:49.928128   76391 system_pods.go:74] duration metric: took 182.816852ms to wait for pod list to return data ...
	I0717 18:32:49.928136   76391 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:32:50.125244   76391 default_sa.go:45] found service account: "default"
	I0717 18:32:50.125274   76391 default_sa.go:55] duration metric: took 197.131625ms for default service account to be created ...
	I0717 18:32:50.125284   76391 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:32:50.327165   76391 system_pods.go:86] 7 kube-system pods found
	I0717 18:32:50.327192   76391 system_pods.go:89] "coredns-5cfdc65f69-spj2w" [6849b651-9346-4d96-97a7-88eca7bbd50a] Running
	I0717 18:32:50.327197   76391 system_pods.go:89] "etcd-no-preload-066175" [be012488-220b-421d-bf16-a3623fafb8fa] Running
	I0717 18:32:50.327201   76391 system_pods.go:89] "kube-apiserver-no-preload-066175" [4292a786-61f3-405d-8784-ec8a58e1b124] Running
	I0717 18:32:50.327205   76391 system_pods.go:89] "kube-controller-manager-no-preload-066175" [937a48f4-7fca-4cee-bb50-51f1720960da] Running
	I0717 18:32:50.327209   76391 system_pods.go:89] "kube-proxy-tn5xn" [f0a910b3-98b6-470f-a5a2-e49369ecb733] Running
	I0717 18:32:50.327213   76391 system_pods.go:89] "kube-scheduler-no-preload-066175" [ffa2475c-7a5a-4988-89a2-4727e07356cb] Running
	I0717 18:32:50.327216   76391 system_pods.go:89] "storage-provisioner" [19914ecc-2fcc-4cb8-bd78-fb6891dcf85d] Running
	I0717 18:32:50.327222   76391 system_pods.go:126] duration metric: took 201.933585ms to wait for k8s-apps to be running ...
	I0717 18:32:50.327227   76391 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:32:50.327272   76391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:32:50.341672   76391 system_svc.go:56] duration metric: took 14.434151ms WaitForService to wait for kubelet
	I0717 18:32:50.341703   76391 kubeadm.go:582] duration metric: took 38.040827725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:32:50.341724   76391 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:32:50.525046   76391 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:32:50.525074   76391 node_conditions.go:123] node cpu capacity is 2
	I0717 18:32:50.525085   76391 node_conditions.go:105] duration metric: took 183.356783ms to run NodePressure ...
	I0717 18:32:50.525095   76391 start.go:241] waiting for startup goroutines ...
	I0717 18:32:50.525106   76391 start.go:246] waiting for cluster config update ...
	I0717 18:32:50.525115   76391 start.go:255] writing updated cluster config ...
	I0717 18:32:50.525370   76391 ssh_runner.go:195] Run: rm -f paused
	I0717 18:32:50.572889   76391 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 18:32:50.574822   76391 out.go:177] * Done! kubectl is now configured to use "no-preload-066175" cluster and "default" namespace by default
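	[editor note] The wait loop above (pod_ready.go) blocks until every system-critical pod reports the Ready condition before the apiserver/healthz and service-account checks run. A minimal client-go sketch of that Ready check follows; it lists current state once rather than polling with a timeout, and the kubeconfig path is an assumption for illustration, not a value taken from this run.

	// podready_sketch.go - minimal sketch of the Ready condition check performed by
	// pod_ready.go above; assumes a local kubeconfig, not the harness's real implementation.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: use ~/.kube/config; the CI run writes its own kubeconfig elsewhere.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			ready := false
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s Ready=%v\n", pod.Name, ready)
		}
	}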
	I0717 18:32:47.560038   64770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:32:47.560806   64770 addons.go:510] duration metric: took 4.212164ms for enable addons: enabled=[]
	I0717 18:32:47.732445   64770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:32:47.765302   64770 node_ready.go:35] waiting up to 6m0s for node "pause-371172" to be "Ready" ...
	I0717 18:32:47.773105   64770 node_ready.go:49] node "pause-371172" has status "Ready":"True"
	I0717 18:32:47.773124   64770 node_ready.go:38] duration metric: took 7.786324ms for node "pause-371172" to be "Ready" ...
	I0717 18:32:47.773132   64770 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:47.780749   64770 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-884nf" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.294878   64770 pod_ready.go:92] pod "coredns-7db6d8ff4d-884nf" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.294901   64770 pod_ready.go:81] duration metric: took 1.514125468s for pod "coredns-7db6d8ff4d-884nf" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.294910   64770 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fds59" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.305093   64770 pod_ready.go:92] pod "coredns-7db6d8ff4d-fds59" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.305114   64770 pod_ready.go:81] duration metric: took 10.197745ms for pod "coredns-7db6d8ff4d-fds59" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.305125   64770 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.310353   64770 pod_ready.go:92] pod "etcd-pause-371172" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.310376   64770 pod_ready.go:81] duration metric: took 5.245469ms for pod "etcd-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.310384   64770 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.315575   64770 pod_ready.go:92] pod "kube-apiserver-pause-371172" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.315595   64770 pod_ready.go:81] duration metric: took 5.20478ms for pod "kube-apiserver-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.315604   64770 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.368580   64770 pod_ready.go:92] pod "kube-controller-manager-pause-371172" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.368604   64770 pod_ready.go:81] duration metric: took 52.994204ms for pod "kube-controller-manager-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.368616   64770 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m9svn" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.769496   64770 pod_ready.go:92] pod "kube-proxy-m9svn" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.769516   64770 pod_ready.go:81] duration metric: took 400.894448ms for pod "kube-proxy-m9svn" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.769529   64770 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:50.170101   64770 pod_ready.go:92] pod "kube-scheduler-pause-371172" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:50.170121   64770 pod_ready.go:81] duration metric: took 400.586022ms for pod "kube-scheduler-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:50.170130   64770 pod_ready.go:38] duration metric: took 2.396988581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:50.170143   64770 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:32:50.170187   64770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:32:50.187214   64770 api_server.go:72] duration metric: took 2.630643931s to wait for apiserver process to appear ...
	I0717 18:32:50.187234   64770 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:32:50.187250   64770 api_server.go:253] Checking apiserver healthz at https://192.168.50.21:8443/healthz ...
	I0717 18:32:50.193392   64770 api_server.go:279] https://192.168.50.21:8443/healthz returned 200:
	ok
	I0717 18:32:50.194490   64770 api_server.go:141] control plane version: v1.30.2
	I0717 18:32:50.194514   64770 api_server.go:131] duration metric: took 7.272389ms to wait for apiserver health ...
	I0717 18:32:50.194523   64770 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:32:50.371172   64770 system_pods.go:59] 7 kube-system pods found
	I0717 18:32:50.371200   64770 system_pods.go:61] "coredns-7db6d8ff4d-884nf" [27cac9c3-742d-416c-a281-0aaf074fbd3a] Running
	I0717 18:32:50.371205   64770 system_pods.go:61] "coredns-7db6d8ff4d-fds59" [753107be-ccbf-431f-8a2e-e79bdb96f7c4] Running
	I0717 18:32:50.371209   64770 system_pods.go:61] "etcd-pause-371172" [40b5faff-c706-4a73-8a4b-b71a85a6360f] Running
	I0717 18:32:50.371212   64770 system_pods.go:61] "kube-apiserver-pause-371172" [fa9bc423-2462-4ede-ab92-3cc052996937] Running
	I0717 18:32:50.371216   64770 system_pods.go:61] "kube-controller-manager-pause-371172" [62f978f8-ea27-438e-9632-b7367c7054c4] Running
	I0717 18:32:50.371219   64770 system_pods.go:61] "kube-proxy-m9svn" [9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e] Running
	I0717 18:32:50.371222   64770 system_pods.go:61] "kube-scheduler-pause-371172" [7974024d-6422-42eb-a8d7-f21d57cfe807] Running
	I0717 18:32:50.371227   64770 system_pods.go:74] duration metric: took 176.697366ms to wait for pod list to return data ...
	I0717 18:32:50.371234   64770 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:32:50.569599   64770 default_sa.go:45] found service account: "default"
	I0717 18:32:50.569629   64770 default_sa.go:55] duration metric: took 198.388656ms for default service account to be created ...
	I0717 18:32:50.569646   64770 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:32:50.771808   64770 system_pods.go:86] 7 kube-system pods found
	I0717 18:32:50.771838   64770 system_pods.go:89] "coredns-7db6d8ff4d-884nf" [27cac9c3-742d-416c-a281-0aaf074fbd3a] Running
	I0717 18:32:50.771846   64770 system_pods.go:89] "coredns-7db6d8ff4d-fds59" [753107be-ccbf-431f-8a2e-e79bdb96f7c4] Running
	I0717 18:32:50.771852   64770 system_pods.go:89] "etcd-pause-371172" [40b5faff-c706-4a73-8a4b-b71a85a6360f] Running
	I0717 18:32:50.771858   64770 system_pods.go:89] "kube-apiserver-pause-371172" [fa9bc423-2462-4ede-ab92-3cc052996937] Running
	I0717 18:32:50.771864   64770 system_pods.go:89] "kube-controller-manager-pause-371172" [62f978f8-ea27-438e-9632-b7367c7054c4] Running
	I0717 18:32:50.771870   64770 system_pods.go:89] "kube-proxy-m9svn" [9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e] Running
	I0717 18:32:50.771877   64770 system_pods.go:89] "kube-scheduler-pause-371172" [7974024d-6422-42eb-a8d7-f21d57cfe807] Running
	I0717 18:32:50.771886   64770 system_pods.go:126] duration metric: took 202.233078ms to wait for k8s-apps to be running ...
	I0717 18:32:50.771898   64770 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:32:50.771938   64770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:32:50.786825   64770 system_svc.go:56] duration metric: took 14.917593ms WaitForService to wait for kubelet
	I0717 18:32:50.786857   64770 kubeadm.go:582] duration metric: took 3.23028737s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:32:50.786880   64770 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:32:50.969667   64770 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:32:50.969689   64770 node_conditions.go:123] node cpu capacity is 2
	I0717 18:32:50.969697   64770 node_conditions.go:105] duration metric: took 182.808234ms to run NodePressure ...
	I0717 18:32:50.969707   64770 start.go:241] waiting for startup goroutines ...
	I0717 18:32:50.969713   64770 start.go:246] waiting for cluster config update ...
	I0717 18:32:50.969720   64770 start.go:255] writing updated cluster config ...
	I0717 18:32:50.970016   64770 ssh_runner.go:195] Run: rm -f paused
	I0717 18:32:51.017830   64770 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:32:51.019761   64770 out.go:177] * Done! kubectl is now configured to use "pause-371172" cluster and "default" namespace by default
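	[editor note] Both start-up streams finish by probing /healthz on the apiserver (api_server.go:253) before declaring the cluster ready. A hedged Go sketch of that probe is below, using the 192.168.50.21:8443 endpoint reported for pause-371172 in the log; skipping TLS verification is a shortcut assumed here for brevity, whereas the real check trusts the cluster CA.

	// healthz_sketch.go - sketch of the apiserver healthz probe logged above;
	// endpoint taken from the log, TLS verification skipped as an assumption.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip cert verification; the harness uses the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.50.21:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // a healthy apiserver returns "200 ok"
	}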
	
	
	==> CRI-O <==
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.669386722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241171669361890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f03016eb-bdf7-4673-8326-392778026e8c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.670030115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bce286c5-3c98-49e2-8adf-ff22f77fda6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.670088701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bce286c5-3c98-49e2-8adf-ff22f77fda6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.670314017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b1e482d1c8ef316e529644708a390e6e7f46dc5f9b2a3272f391471372039b,PodSandboxId:44ffd8512f7ade9b1821a6405a025b08a7faade2182bb67cbf7ed33b961a60ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168292272767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fds59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753107be-ccbf-431f-8a2e-e79bdb96f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9edc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922a7e0f262a8282d9e72f42fdeca7478428c833dbeb2a9b95a5738d1ef95e69,PodSandboxId:4c4408303c67164222da84a7bf59e287a06e4fc94ed1085a051669523e55e20d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168214407421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-884nf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 27cac9c3-742d-416c-a281-0aaf074fbd3a,},Annotations:map[string]string{io.kubernetes.container.hash: ed3513db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26db6113dcb1ff79fcd77d6d39b46c69b8761312bf5238a27ffd2e11eda174f7,PodSandboxId:06c5f342d1f79b4c2d91bf5328ab371a7f226ef280618ba0f1d3990c7d0c6c34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Cre
atedAt:1721241167795198780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9svn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 4dd93799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b08dd954d955c74b4b84f21646fa33facb15fc2c1e53a68975c3187779cc6a29,PodSandboxId:ad26dc0b8c7874b3a7bbc2e23810502f05e3201a2756f197b8bd1d96e6efa775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241147345888833,La
bels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af54c2061de253ace2de68751df8da5,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1a2aa0f51d772dc4abbce1d3004d6b52f7961de71561a8776ab799c79b8df0,PodSandboxId:560487aab164542ad8417325db6c3c052cb855002b2abbf25560b824f4736d5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241147342168856,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fa797dfdfb736f9e861ba1561f2f58,},Annotations:map[string]string{io.kubernetes.container.hash: 7731edf5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ec9b207bfd3ffea2c95fc2e155c8e565236b4b1b904baaab96e556de26fe77,PodSandboxId:58f953c6498c815f64e7e72954faa60fe9e485ad07173c9fe959e57055ceffec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241147319076750,Labels:map[string]string{io.kubernetes.container.name: kube-
controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac1aa9fed9b42ec68485013aa64c8d2,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64e3a55717be3283a0695169654d5d905bfebf0b9f499df4ed4bf6766596ea1,PodSandboxId:e5d77122fae676106ac8f266d61cf0116d8b98a826602e4cac2ad55e8ef3a286,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241147237628865,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406735d310893ae4eeec2b9b969cff1442005eab3956fac313fbf5545470e815,PodSandboxId:f6d12725dc8e4ef65263bba54f4f8d6cea4b89d3899c69d1156a4e7191ba39f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721240863934863408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bce286c5-3c98-49e2-8adf-ff22f77fda6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.704405532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66946962-7021-4e16-b10f-e034b2c69c72 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.704496520Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66946962-7021-4e16-b10f-e034b2c69c72 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.706502259Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96a6dee6-51b1-40fc-8c8b-743a9c9ca251 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.706896610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241171706871597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96a6dee6-51b1-40fc-8c8b-743a9c9ca251 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.707402657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ca2bfc4-7b7b-4655-b12e-1f2f7fe37e86 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.707479293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ca2bfc4-7b7b-4655-b12e-1f2f7fe37e86 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.707696458Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b1e482d1c8ef316e529644708a390e6e7f46dc5f9b2a3272f391471372039b,PodSandboxId:44ffd8512f7ade9b1821a6405a025b08a7faade2182bb67cbf7ed33b961a60ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168292272767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fds59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753107be-ccbf-431f-8a2e-e79bdb96f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9edc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922a7e0f262a8282d9e72f42fdeca7478428c833dbeb2a9b95a5738d1ef95e69,PodSandboxId:4c4408303c67164222da84a7bf59e287a06e4fc94ed1085a051669523e55e20d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168214407421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-884nf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 27cac9c3-742d-416c-a281-0aaf074fbd3a,},Annotations:map[string]string{io.kubernetes.container.hash: ed3513db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26db6113dcb1ff79fcd77d6d39b46c69b8761312bf5238a27ffd2e11eda174f7,PodSandboxId:06c5f342d1f79b4c2d91bf5328ab371a7f226ef280618ba0f1d3990c7d0c6c34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Cre
atedAt:1721241167795198780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9svn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 4dd93799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b08dd954d955c74b4b84f21646fa33facb15fc2c1e53a68975c3187779cc6a29,PodSandboxId:ad26dc0b8c7874b3a7bbc2e23810502f05e3201a2756f197b8bd1d96e6efa775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241147345888833,La
bels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af54c2061de253ace2de68751df8da5,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1a2aa0f51d772dc4abbce1d3004d6b52f7961de71561a8776ab799c79b8df0,PodSandboxId:560487aab164542ad8417325db6c3c052cb855002b2abbf25560b824f4736d5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241147342168856,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fa797dfdfb736f9e861ba1561f2f58,},Annotations:map[string]string{io.kubernetes.container.hash: 7731edf5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ec9b207bfd3ffea2c95fc2e155c8e565236b4b1b904baaab96e556de26fe77,PodSandboxId:58f953c6498c815f64e7e72954faa60fe9e485ad07173c9fe959e57055ceffec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241147319076750,Labels:map[string]string{io.kubernetes.container.name: kube-
controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac1aa9fed9b42ec68485013aa64c8d2,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64e3a55717be3283a0695169654d5d905bfebf0b9f499df4ed4bf6766596ea1,PodSandboxId:e5d77122fae676106ac8f266d61cf0116d8b98a826602e4cac2ad55e8ef3a286,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241147237628865,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406735d310893ae4eeec2b9b969cff1442005eab3956fac313fbf5545470e815,PodSandboxId:f6d12725dc8e4ef65263bba54f4f8d6cea4b89d3899c69d1156a4e7191ba39f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721240863934863408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ca2bfc4-7b7b-4655-b12e-1f2f7fe37e86 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.741774215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7604cb0-e0fe-470f-b3f5-a577ce5d73c9 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.741860019Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7604cb0-e0fe-470f-b3f5-a577ce5d73c9 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.743049347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f0dd658-dcde-47b7-9e79-d2726de2ce6a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.743624709Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241171743595133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f0dd658-dcde-47b7-9e79-d2726de2ce6a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.744425073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcce0b5d-131b-4abd-a416-8dd1fa289cd7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.744489389Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bcce0b5d-131b-4abd-a416-8dd1fa289cd7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.744696517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b1e482d1c8ef316e529644708a390e6e7f46dc5f9b2a3272f391471372039b,PodSandboxId:44ffd8512f7ade9b1821a6405a025b08a7faade2182bb67cbf7ed33b961a60ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168292272767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fds59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753107be-ccbf-431f-8a2e-e79bdb96f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9edc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922a7e0f262a8282d9e72f42fdeca7478428c833dbeb2a9b95a5738d1ef95e69,PodSandboxId:4c4408303c67164222da84a7bf59e287a06e4fc94ed1085a051669523e55e20d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168214407421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-884nf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 27cac9c3-742d-416c-a281-0aaf074fbd3a,},Annotations:map[string]string{io.kubernetes.container.hash: ed3513db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26db6113dcb1ff79fcd77d6d39b46c69b8761312bf5238a27ffd2e11eda174f7,PodSandboxId:06c5f342d1f79b4c2d91bf5328ab371a7f226ef280618ba0f1d3990c7d0c6c34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Cre
atedAt:1721241167795198780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9svn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 4dd93799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b08dd954d955c74b4b84f21646fa33facb15fc2c1e53a68975c3187779cc6a29,PodSandboxId:ad26dc0b8c7874b3a7bbc2e23810502f05e3201a2756f197b8bd1d96e6efa775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241147345888833,La
bels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af54c2061de253ace2de68751df8da5,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1a2aa0f51d772dc4abbce1d3004d6b52f7961de71561a8776ab799c79b8df0,PodSandboxId:560487aab164542ad8417325db6c3c052cb855002b2abbf25560b824f4736d5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241147342168856,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fa797dfdfb736f9e861ba1561f2f58,},Annotations:map[string]string{io.kubernetes.container.hash: 7731edf5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ec9b207bfd3ffea2c95fc2e155c8e565236b4b1b904baaab96e556de26fe77,PodSandboxId:58f953c6498c815f64e7e72954faa60fe9e485ad07173c9fe959e57055ceffec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241147319076750,Labels:map[string]string{io.kubernetes.container.name: kube-
controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac1aa9fed9b42ec68485013aa64c8d2,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64e3a55717be3283a0695169654d5d905bfebf0b9f499df4ed4bf6766596ea1,PodSandboxId:e5d77122fae676106ac8f266d61cf0116d8b98a826602e4cac2ad55e8ef3a286,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241147237628865,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406735d310893ae4eeec2b9b969cff1442005eab3956fac313fbf5545470e815,PodSandboxId:f6d12725dc8e4ef65263bba54f4f8d6cea4b89d3899c69d1156a4e7191ba39f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721240863934863408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bcce0b5d-131b-4abd-a416-8dd1fa289cd7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.781203730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20da6a3e-a831-4e81-ada0-2f70428d29bb name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.781331911Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20da6a3e-a831-4e81-ada0-2f70428d29bb name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.782969000Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dbcf7dd-dfb0-467a-adb2-9ada0fd2aebf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.783383606Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241171783358712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dbcf7dd-dfb0-467a-adb2-9ada0fd2aebf name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.783847316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6854510c-26a9-462c-9051-27ddc3bf983f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.783950635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6854510c-26a9-462c-9051-27ddc3bf983f name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:51 pause-371172 crio[2861]: time="2024-07-17 18:32:51.784191343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b1e482d1c8ef316e529644708a390e6e7f46dc5f9b2a3272f391471372039b,PodSandboxId:44ffd8512f7ade9b1821a6405a025b08a7faade2182bb67cbf7ed33b961a60ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168292272767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fds59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753107be-ccbf-431f-8a2e-e79bdb96f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9edc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922a7e0f262a8282d9e72f42fdeca7478428c833dbeb2a9b95a5738d1ef95e69,PodSandboxId:4c4408303c67164222da84a7bf59e287a06e4fc94ed1085a051669523e55e20d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168214407421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-884nf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 27cac9c3-742d-416c-a281-0aaf074fbd3a,},Annotations:map[string]string{io.kubernetes.container.hash: ed3513db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26db6113dcb1ff79fcd77d6d39b46c69b8761312bf5238a27ffd2e11eda174f7,PodSandboxId:06c5f342d1f79b4c2d91bf5328ab371a7f226ef280618ba0f1d3990c7d0c6c34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Cre
atedAt:1721241167795198780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9svn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 4dd93799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b08dd954d955c74b4b84f21646fa33facb15fc2c1e53a68975c3187779cc6a29,PodSandboxId:ad26dc0b8c7874b3a7bbc2e23810502f05e3201a2756f197b8bd1d96e6efa775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241147345888833,La
bels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af54c2061de253ace2de68751df8da5,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1a2aa0f51d772dc4abbce1d3004d6b52f7961de71561a8776ab799c79b8df0,PodSandboxId:560487aab164542ad8417325db6c3c052cb855002b2abbf25560b824f4736d5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241147342168856,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fa797dfdfb736f9e861ba1561f2f58,},Annotations:map[string]string{io.kubernetes.container.hash: 7731edf5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ec9b207bfd3ffea2c95fc2e155c8e565236b4b1b904baaab96e556de26fe77,PodSandboxId:58f953c6498c815f64e7e72954faa60fe9e485ad07173c9fe959e57055ceffec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241147319076750,Labels:map[string]string{io.kubernetes.container.name: kube-
controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac1aa9fed9b42ec68485013aa64c8d2,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64e3a55717be3283a0695169654d5d905bfebf0b9f499df4ed4bf6766596ea1,PodSandboxId:e5d77122fae676106ac8f266d61cf0116d8b98a826602e4cac2ad55e8ef3a286,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241147237628865,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406735d310893ae4eeec2b9b969cff1442005eab3956fac313fbf5545470e815,PodSandboxId:f6d12725dc8e4ef65263bba54f4f8d6cea4b89d3899c69d1156a4e7191ba39f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721240863934863408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6854510c-26a9-462c-9051-27ddc3bf983f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6b1e482d1c8e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   0                   44ffd8512f7ad       coredns-7db6d8ff4d-fds59
	922a7e0f262a8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   0                   4c4408303c671       coredns-7db6d8ff4d-884nf
	26db6113dcb1f       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   4 seconds ago       Running             kube-proxy                0                   06c5f342d1f79       kube-proxy-m9svn
	b08dd954d955c       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   24 seconds ago      Running             kube-scheduler            3                   ad26dc0b8c787       kube-scheduler-pause-371172
	5d1a2aa0f51d7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      4                   560487aab1645       etcd-pause-371172
	82ec9b207bfd3       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   24 seconds ago      Running             kube-controller-manager   3                   58f953c6498c8       kube-controller-manager-pause-371172
	b64e3a55717be       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   24 seconds ago      Running             kube-apiserver            4                   e5d77122fae67       kube-apiserver-pause-371172
	406735d310893       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   5 minutes ago       Exited              kube-apiserver            3                   f6d12725dc8e4       kube-apiserver-pause-371172
	
	
	==> coredns [922a7e0f262a8282d9e72f42fdeca7478428c833dbeb2a9b95a5738d1ef95e69] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e6b1e482d1c8ef316e529644708a390e6e7f46dc5f9b2a3272f391471372039b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               pause-371172
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-371172
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=pause-371172
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_32_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:32:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-371172
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:32:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:32:32 +0000   Wed, 17 Jul 2024 18:32:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:32:32 +0000   Wed, 17 Jul 2024 18:32:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:32:32 +0000   Wed, 17 Jul 2024 18:32:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:32:32 +0000   Wed, 17 Jul 2024 18:32:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.21
	  Hostname:    pause-371172
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a804fcb9ba4a45f09b8de2e5b44edb1b
	  System UUID:                a804fcb9-ba4a-45f0-9b8d-e2e5b44edb1b
	  Boot ID:                    65b9b303-293e-45ff-9c83-dc6d6afb7884
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-884nf                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5s
	  kube-system                 coredns-7db6d8ff4d-fds59                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5s
	  kube-system                 etcd-pause-371172                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         20s
	  kube-system                 kube-apiserver-pause-371172             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-controller-manager-pause-371172    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-proxy-m9svn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-pause-371172             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-371172 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-371172 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-371172 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s                kubelet          Node pause-371172 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s                kubelet          Node pause-371172 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s                kubelet          Node pause-371172 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6s                 node-controller  Node pause-371172 event: Registered Node pause-371172 in Controller
	
	
	==> dmesg <==
	[  +4.191979] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +5.159806] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.068107] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.020814] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.081460] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.293657] systemd-fstab-generator[1517]: Ignoring "noauto" option for root device
	[  +0.110867] kauditd_printk_skb: 21 callbacks suppressed
	[Jul17 18:25] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.081167] systemd-fstab-generator[2624]: Ignoring "noauto" option for root device
	[  +0.188988] systemd-fstab-generator[2657]: Ignoring "noauto" option for root device
	[  +0.217452] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.204109] systemd-fstab-generator[2692]: Ignoring "noauto" option for root device
	[  +0.403525] systemd-fstab-generator[2723]: Ignoring "noauto" option for root device
	[Jul17 18:27] systemd-fstab-generator[2968]: Ignoring "noauto" option for root device
	[  +0.078165] kauditd_printk_skb: 174 callbacks suppressed
	[  +5.973819] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.452425] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.507164] systemd-fstab-generator[3722]: Ignoring "noauto" option for root device
	[  +0.744433] kauditd_printk_skb: 23 callbacks suppressed
	[Jul17 18:32] kauditd_printk_skb: 5 callbacks suppressed
	[ +17.493888] systemd-fstab-generator[5353]: Ignoring "noauto" option for root device
	[  +6.052760] systemd-fstab-generator[5682]: Ignoring "noauto" option for root device
	[  +0.077055] kauditd_printk_skb: 63 callbacks suppressed
	[ +15.676874] systemd-fstab-generator[5894]: Ignoring "noauto" option for root device
	[  +0.090241] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [5d1a2aa0f51d772dc4abbce1d3004d6b52f7961de71561a8776ab799c79b8df0] <==
	{"level":"info","ts":"2024-07-17T18:32:27.686609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab switched to configuration voters=(7747864092090557611)"}
	{"level":"info","ts":"2024-07-17T18:32:27.686797Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f04757488c993a3","local-member-id":"6b85f157810fe4ab","added-peer-id":"6b85f157810fe4ab","added-peer-peer-urls":["https://192.168.50.21:2380"]}
	{"level":"info","ts":"2024-07-17T18:32:27.725953Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T18:32:27.726138Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6b85f157810fe4ab","initial-advertise-peer-urls":["https://192.168.50.21:2380"],"listen-peer-urls":["https://192.168.50.21:2380"],"advertise-client-urls":["https://192.168.50.21:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.21:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T18:32:27.726168Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T18:32:27.726309Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.21:2380"}
	{"level":"info","ts":"2024-07-17T18:32:27.726325Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.21:2380"}
	{"level":"info","ts":"2024-07-17T18:32:27.745281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T18:32:27.74532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T18:32:27.745338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab received MsgPreVoteResp from 6b85f157810fe4ab at term 1"}
	{"level":"info","ts":"2024-07-17T18:32:27.745349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:32:27.745354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab received MsgVoteResp from 6b85f157810fe4ab at term 2"}
	{"level":"info","ts":"2024-07-17T18:32:27.745362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab became leader at term 2"}
	{"level":"info","ts":"2024-07-17T18:32:27.745369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b85f157810fe4ab elected leader 6b85f157810fe4ab at term 2"}
	{"level":"info","ts":"2024-07-17T18:32:27.749379Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:32:27.751531Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6b85f157810fe4ab","local-member-attributes":"{Name:pause-371172 ClientURLs:[https://192.168.50.21:2379]}","request-path":"/0/members/6b85f157810fe4ab/attributes","cluster-id":"6f04757488c993a3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:32:27.751768Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:32:27.755208Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:32:27.75531Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:32:27.755445Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:32:27.756647Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f04757488c993a3","local-member-id":"6b85f157810fe4ab","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:32:27.758365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:32:27.75846Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:32:27.772945Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.21:2379"}
	{"level":"info","ts":"2024-07-17T18:32:27.773195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:32:52 up 8 min,  0 users,  load average: 1.41, 0.61, 0.30
	Linux pause-371172 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [406735d310893ae4eeec2b9b969cff1442005eab3956fac313fbf5545470e815] <==
	W0717 18:32:23.246800       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.301172       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.306281       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.338399       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.356652       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.360321       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.372871       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.434763       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.480717       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.532284       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.554339       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.555599       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.573558       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.587512       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.592372       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.636154       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.675087       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.678982       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.680371       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.692810       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.698627       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.701153       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.713521       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.817426       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.824387       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b64e3a55717be3283a0695169654d5d905bfebf0b9f499df4ed4bf6766596ea1] <==
	I0717 18:32:29.858429       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 18:32:29.858437       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 18:32:29.858443       1 cache.go:39] Caches are synced for autoregister controller
	E0717 18:32:29.882698       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0717 18:32:29.887079       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0717 18:32:29.896652       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 18:32:29.906284       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 18:32:29.906362       1 policy_source.go:224] refreshing policies
	I0717 18:32:29.931718       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 18:32:30.103510       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:32:30.721559       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 18:32:30.726005       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 18:32:30.726034       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 18:32:31.284411       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 18:32:31.323719       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 18:32:31.456449       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 18:32:31.470761       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.21]
	I0717 18:32:31.471673       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 18:32:31.484375       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 18:32:31.802215       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 18:32:32.200900       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 18:32:32.213102       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 18:32:32.221493       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 18:32:46.914208       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 18:32:47.363716       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [82ec9b207bfd3ffea2c95fc2e155c8e565236b4b1b904baaab96e556de26fe77] <==
	I0717 18:32:46.414294       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 18:32:46.419308       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 18:32:46.420628       1 shared_informer.go:320] Caches are synced for node
	I0717 18:32:46.420701       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0717 18:32:46.420738       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0717 18:32:46.420765       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0717 18:32:46.420787       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0717 18:32:46.428872       1 shared_informer.go:320] Caches are synced for PV protection
	I0717 18:32:46.437530       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="pause-371172" podCIDRs=["10.244.0.0/24"]
	I0717 18:32:46.464568       1 shared_informer.go:320] Caches are synced for namespace
	I0717 18:32:46.565351       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 18:32:46.565826       1 shared_informer.go:320] Caches are synced for attach detach
	I0717 18:32:46.612086       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0717 18:32:47.044559       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 18:32:47.061288       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 18:32:47.061320       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 18:32:47.596349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="674.93572ms"
	I0717 18:32:47.612328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.852255ms"
	I0717 18:32:47.619450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.551µs"
	I0717 18:32:47.626492       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.893µs"
	I0717 18:32:49.200776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="150.216µs"
	I0717 18:32:49.229844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.036747ms"
	I0717 18:32:49.232747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="252.969µs"
	I0717 18:32:49.258068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.132606ms"
	I0717 18:32:49.258669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.832µs"
	
	
	==> kube-proxy [26db6113dcb1ff79fcd77d6d39b46c69b8761312bf5238a27ffd2e11eda174f7] <==
	I0717 18:32:47.973451       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:32:47.995721       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.21"]
	I0717 18:32:48.069590       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:32:48.069633       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:32:48.069649       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:32:48.071892       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:32:48.072115       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:32:48.072133       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:32:48.073887       1 config.go:192] "Starting service config controller"
	I0717 18:32:48.073915       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:32:48.073940       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:32:48.073945       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:32:48.074854       1 config.go:319] "Starting node config controller"
	I0717 18:32:48.074922       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:32:48.174477       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:32:48.174546       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:32:48.175711       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b08dd954d955c74b4b84f21646fa33facb15fc2c1e53a68975c3187779cc6a29] <==
	W0717 18:32:29.836736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:32:29.836757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:32:29.840683       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:32:29.840720       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:32:30.643070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:32:30.643123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:32:30.721585       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:32:30.721666       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:32:30.750188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:32:30.750652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:32:30.810024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:32:30.810157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:32:30.835514       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:32:30.835623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 18:32:30.910117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:32:30.910279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:32:30.936393       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:32:30.937381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:32:31.072785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:32:31.072892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:32:31.080318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:32:31.080395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:32:31.086975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:32:31.087049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0717 18:32:32.931522       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 18:32:32 pause-371172 kubelet[5689]: I0717 18:32:32.371815    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4ac1aa9fed9b42ec68485013aa64c8d2-kubeconfig\") pod \"kube-controller-manager-pause-371172\" (UID: \"4ac1aa9fed9b42ec68485013aa64c8d2\") " pod="kube-system/kube-controller-manager-pause-371172"
	Jul 17 18:32:32 pause-371172 kubelet[5689]: I0717 18:32:32.371829    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6af54c2061de253ace2de68751df8da5-kubeconfig\") pod \"kube-scheduler-pause-371172\" (UID: \"6af54c2061de253ace2de68751df8da5\") " pod="kube-system/kube-scheduler-pause-371172"
	Jul 17 18:32:32 pause-371172 kubelet[5689]: I0717 18:32:32.371841    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/52fa797dfdfb736f9e861ba1561f2f58-etcd-certs\") pod \"etcd-pause-371172\" (UID: \"52fa797dfdfb736f9e861ba1561f2f58\") " pod="kube-system/etcd-pause-371172"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.048281    5689 apiserver.go:52] "Watching apiserver"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.070450    5689 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: E0717 18:32:33.149534    5689 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-371172\" already exists" pod="kube-system/kube-controller-manager-pause-371172"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: E0717 18:32:33.150377    5689 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-371172\" already exists" pod="kube-system/kube-apiserver-pause-371172"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.167751    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-371172" podStartSLOduration=1.167717223 podStartE2EDuration="1.167717223s" podCreationTimestamp="2024-07-17 18:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:33.157701122 +0000 UTC m=+1.191363528" watchObservedRunningTime="2024-07-17 18:32:33.167717223 +0000 UTC m=+1.201379631"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.178509    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-371172" podStartSLOduration=1.178492126 podStartE2EDuration="1.178492126s" podCreationTimestamp="2024-07-17 18:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:33.168310041 +0000 UTC m=+1.201972442" watchObservedRunningTime="2024-07-17 18:32:33.178492126 +0000 UTC m=+1.212154532"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.189707    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-371172" podStartSLOduration=1.18969162 podStartE2EDuration="1.18969162s" podCreationTimestamp="2024-07-17 18:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:33.178977418 +0000 UTC m=+1.212639809" watchObservedRunningTime="2024-07-17 18:32:33.18969162 +0000 UTC m=+1.223354022"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.190379    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-371172" podStartSLOduration=1.19036118 podStartE2EDuration="1.19036118s" podCreationTimestamp="2024-07-17 18:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:33.190218444 +0000 UTC m=+1.223880835" watchObservedRunningTime="2024-07-17 18:32:33.19036118 +0000 UTC m=+1.224023597"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.383821    5689 topology_manager.go:215] "Topology Admit Handler" podUID="9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e" podNamespace="kube-system" podName="kube-proxy-m9svn"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.473816    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e-lib-modules\") pod \"kube-proxy-m9svn\" (UID: \"9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e\") " pod="kube-system/kube-proxy-m9svn"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.473867    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e-kube-proxy\") pod \"kube-proxy-m9svn\" (UID: \"9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e\") " pod="kube-system/kube-proxy-m9svn"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.473888    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e-xtables-lock\") pod \"kube-proxy-m9svn\" (UID: \"9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e\") " pod="kube-system/kube-proxy-m9svn"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.473904    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6glv\" (UniqueName: \"kubernetes.io/projected/9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e-kube-api-access-x6glv\") pod \"kube-proxy-m9svn\" (UID: \"9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e\") " pod="kube-system/kube-proxy-m9svn"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.544001    5689 topology_manager.go:215] "Topology Admit Handler" podUID="27cac9c3-742d-416c-a281-0aaf074fbd3a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-884nf"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.574603    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27cac9c3-742d-416c-a281-0aaf074fbd3a-config-volume\") pod \"coredns-7db6d8ff4d-884nf\" (UID: \"27cac9c3-742d-416c-a281-0aaf074fbd3a\") " pod="kube-system/coredns-7db6d8ff4d-884nf"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.574652    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zpgd\" (UniqueName: \"kubernetes.io/projected/27cac9c3-742d-416c-a281-0aaf074fbd3a-kube-api-access-8zpgd\") pod \"coredns-7db6d8ff4d-884nf\" (UID: \"27cac9c3-742d-416c-a281-0aaf074fbd3a\") " pod="kube-system/coredns-7db6d8ff4d-884nf"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.587531    5689 topology_manager.go:215] "Topology Admit Handler" podUID="753107be-ccbf-431f-8a2e-e79bdb96f7c4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fds59"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.675621    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvjl\" (UniqueName: \"kubernetes.io/projected/753107be-ccbf-431f-8a2e-e79bdb96f7c4-kube-api-access-vpvjl\") pod \"coredns-7db6d8ff4d-fds59\" (UID: \"753107be-ccbf-431f-8a2e-e79bdb96f7c4\") " pod="kube-system/coredns-7db6d8ff4d-fds59"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.675861    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/753107be-ccbf-431f-8a2e-e79bdb96f7c4-config-volume\") pod \"coredns-7db6d8ff4d-fds59\" (UID: \"753107be-ccbf-431f-8a2e-e79bdb96f7c4\") " pod="kube-system/coredns-7db6d8ff4d-fds59"
	Jul 17 18:32:49 pause-371172 kubelet[5689]: I0717 18:32:49.197719    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m9svn" podStartSLOduration=2.197699881 podStartE2EDuration="2.197699881s" podCreationTimestamp="2024-07-17 18:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:48.203973941 +0000 UTC m=+16.237636350" watchObservedRunningTime="2024-07-17 18:32:49.197699881 +0000 UTC m=+17.231362282"
	Jul 17 18:32:49 pause-371172 kubelet[5689]: I0717 18:32:49.216502    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fds59" podStartSLOduration=2.216480674 podStartE2EDuration="2.216480674s" podCreationTimestamp="2024-07-17 18:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:49.198948443 +0000 UTC m=+17.232610852" watchObservedRunningTime="2024-07-17 18:32:49.216480674 +0000 UTC m=+17.250143083"
	Jul 17 18:32:49 pause-371172 kubelet[5689]: I0717 18:32:49.242969    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-884nf" podStartSLOduration=2.242911771 podStartE2EDuration="2.242911771s" podCreationTimestamp="2024-07-17 18:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:49.21760968 +0000 UTC m=+17.251272090" watchObservedRunningTime="2024-07-17 18:32:49.242911771 +0000 UTC m=+17.276574177"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-371172 -n pause-371172
helpers_test.go:261: (dbg) Run:  kubectl --context pause-371172 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-371172 -n pause-371172
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-371172 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-371172 logs -n 25: (1.116015759s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo journalctl -xeu kubelet                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC |                     |
	|         | sudo systemctl status docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC |                     |
	|         | sudo docker system info                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC |                     |
	|         | sudo systemctl status                                 |                           |         |         |                     |                     |
	|         | cri-docker --all --full                               |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat cri-docker                         |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476 sudo cat                 | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf  |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476 sudo cat                 | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo cri-dockerd --version                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC |                     |
	|         | sudo systemctl status                                 |                           |         |         |                     |                     |
	|         | containerd --all --full                               |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat containerd                         |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476 sudo cat                 | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | /lib/systemd/system/containerd.service                |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo cat                                              |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo containerd config dump                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl status crio                            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat crio                               |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo find /etc/crio -type f                           |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                         |                           |         |         |                     |                     |
	|         | \;                                                    |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo crio config                                      |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-235476                          | enable-default-cni-235476 | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	| start   | -p embed-certs-527415                                 | embed-certs-527415        | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:32 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                          |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-527415           | embed-certs-527415        | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p embed-certs-527415                                 | embed-certs-527415        | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
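	The table above is the command audit captured in the logs: each row is one minikube invocation against a profile (ssh, addons, stop, start, delete) with its start time and, when it finished, end time. As a rough sketch only, assuming the locally built binary at out/minikube-linux-amd64 and a profile that still exists (the one shown here was deleted later in the run), the crio inspection rows could be reproduced by hand like this:

		out/minikube-linux-amd64 -p enable-default-cni-235476 ssh "sudo systemctl cat crio --no-pager"
		out/minikube-linux-amd64 -p enable-default-cni-235476 ssh "sudo crio config"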
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:31:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:31:22.639596   77994 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:31:22.639939   77994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:31:22.639964   77994 out.go:304] Setting ErrFile to fd 2...
	I0717 18:31:22.639998   77994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:31:22.640332   77994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:31:22.640886   77994 out.go:298] Setting JSON to false
	I0717 18:31:22.641905   77994 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8026,"bootTime":1721233057,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:31:22.641959   77994 start.go:139] virtualization: kvm guest
	I0717 18:31:22.644248   77994 out.go:177] * [embed-certs-527415] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:31:22.645764   77994 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:31:22.645790   77994 notify.go:220] Checking for updates...
	I0717 18:31:22.648446   77994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:31:22.649650   77994 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:31:22.650971   77994 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:31:22.652267   77994 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:31:22.653530   77994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:31:22.655045   77994 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:31:22.655130   77994 config.go:182] Loaded profile config "old-k8s-version-019549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:31:22.655243   77994 config.go:182] Loaded profile config "pause-371172": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:31:22.655337   77994 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:31:22.691260   77994 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:31:22.692498   77994 start.go:297] selected driver: kvm2
	I0717 18:31:22.692523   77994 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:31:22.692538   77994 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:31:22.693340   77994 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:31:22.693419   77994 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:31:22.711624   77994 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:31:22.711696   77994 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:31:22.711928   77994 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:31:22.712003   77994 cni.go:84] Creating CNI manager for ""
	I0717 18:31:22.712017   77994 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:31:22.712024   77994 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:31:22.712120   77994 start.go:340] cluster config:
	{Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:31:22.712224   77994 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:31:22.713980   77994 out.go:177] * Starting "embed-certs-527415" primary control-plane node in "embed-certs-527415" cluster
	I0717 18:31:21.863073   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:21.863649   76391 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:31:21.863670   76391 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:31:21.863604   76525 retry.go:31] will retry after 5.420544594s: waiting for machine to come up
	I0717 18:31:21.953494   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:24.451856   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:22.715361   77994 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:31:22.715404   77994 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:31:22.715414   77994 cache.go:56] Caching tarball of preloaded images
	I0717 18:31:22.715574   77994 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:31:22.715594   77994 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:31:22.715717   77994 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json ...
	I0717 18:31:22.715742   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json: {Name:mk76bd6ccc31581a1abdd4a4a1a2d8d35752fa92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:22.715892   77994 start.go:360] acquireMachinesLock for embed-certs-527415: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:31:27.288475   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:27.289011   76391 main.go:141] libmachine: (no-preload-066175) Found IP for machine: 192.168.72.216
	I0717 18:31:27.289034   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has current primary IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:27.289041   76391 main.go:141] libmachine: (no-preload-066175) Reserving static IP address...
	I0717 18:31:27.289508   76391 main.go:141] libmachine: (no-preload-066175) DBG | unable to find host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"} in network mk-no-preload-066175
	I0717 18:31:27.369012   76391 main.go:141] libmachine: (no-preload-066175) Reserved static IP address: 192.168.72.216
	I0717 18:31:27.369041   76391 main.go:141] libmachine: (no-preload-066175) DBG | Getting to WaitForSSH function...
	I0717 18:31:27.369050   76391 main.go:141] libmachine: (no-preload-066175) Waiting for SSH to be available...
	I0717 18:31:27.371780   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:27.372105   76391 main.go:141] libmachine: (no-preload-066175) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175
	I0717 18:31:27.372132   76391 main.go:141] libmachine: (no-preload-066175) DBG | unable to find defined IP address of network mk-no-preload-066175 interface with MAC address 52:54:00:72:a5:17
	I0717 18:31:27.372149   76391 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH client type: external
	I0717 18:31:27.372193   76391 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa (-rw-------)
	I0717 18:31:27.372244   76391 main.go:141] libmachine: (no-preload-066175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:31:27.372261   76391 main.go:141] libmachine: (no-preload-066175) DBG | About to run SSH command:
	I0717 18:31:27.372276   76391 main.go:141] libmachine: (no-preload-066175) DBG | exit 0
	I0717 18:31:27.376589   76391 main.go:141] libmachine: (no-preload-066175) DBG | SSH cmd err, output: exit status 255: 
	I0717 18:31:27.376606   76391 main.go:141] libmachine: (no-preload-066175) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0717 18:31:27.376614   76391 main.go:141] libmachine: (no-preload-066175) DBG | command : exit 0
	I0717 18:31:27.376623   76391 main.go:141] libmachine: (no-preload-066175) DBG | err     : exit status 255
	I0717 18:31:27.376635   76391 main.go:141] libmachine: (no-preload-066175) DBG | output  : 
	I0717 18:31:26.952582   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:29.452659   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:31.785428   77994 start.go:364] duration metric: took 9.069515129s to acquireMachinesLock for "embed-certs-527415"
	I0717 18:31:31.785493   77994 start.go:93] Provisioning new machine with config: &{Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:31:31.785610   77994 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 18:31:31.787821   77994 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 18:31:31.787997   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:31:31.788041   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:31:31.805247   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37115
	I0717 18:31:31.805669   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:31:31.806215   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:31:31.806239   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:31:31.806763   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:31:31.806991   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:31:31.807166   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:31.807327   77994 start.go:159] libmachine.API.Create for "embed-certs-527415" (driver="kvm2")
	I0717 18:31:31.807359   77994 client.go:168] LocalClient.Create starting
	I0717 18:31:31.807399   77994 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 18:31:31.807436   77994 main.go:141] libmachine: Decoding PEM data...
	I0717 18:31:31.807457   77994 main.go:141] libmachine: Parsing certificate...
	I0717 18:31:31.807524   77994 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 18:31:31.807548   77994 main.go:141] libmachine: Decoding PEM data...
	I0717 18:31:31.807567   77994 main.go:141] libmachine: Parsing certificate...
	I0717 18:31:31.807590   77994 main.go:141] libmachine: Running pre-create checks...
	I0717 18:31:31.807606   77994 main.go:141] libmachine: (embed-certs-527415) Calling .PreCreateCheck
	I0717 18:31:31.808014   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:31:31.808462   77994 main.go:141] libmachine: Creating machine...
	I0717 18:31:31.808479   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Create
	I0717 18:31:31.808624   77994 main.go:141] libmachine: (embed-certs-527415) Creating KVM machine...
	I0717 18:31:31.809897   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found existing default KVM network
	I0717 18:31:31.811352   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:31.811169   78077 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4f:c7:84} reservation:<nil>}
	I0717 18:31:31.812075   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:31.812006   78077 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f3:32:5d} reservation:<nil>}
	I0717 18:31:31.813104   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:31.813019   78077 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a2fd0}
	I0717 18:31:31.813127   77994 main.go:141] libmachine: (embed-certs-527415) DBG | created network xml: 
	I0717 18:31:31.813140   77994 main.go:141] libmachine: (embed-certs-527415) DBG | <network>
	I0717 18:31:31.813149   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   <name>mk-embed-certs-527415</name>
	I0717 18:31:31.813161   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   <dns enable='no'/>
	I0717 18:31:31.813168   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   
	I0717 18:31:31.813184   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0717 18:31:31.813194   77994 main.go:141] libmachine: (embed-certs-527415) DBG |     <dhcp>
	I0717 18:31:31.813221   77994 main.go:141] libmachine: (embed-certs-527415) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0717 18:31:31.813242   77994 main.go:141] libmachine: (embed-certs-527415) DBG |     </dhcp>
	I0717 18:31:31.813252   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   </ip>
	I0717 18:31:31.813263   77994 main.go:141] libmachine: (embed-certs-527415) DBG |   
	I0717 18:31:31.813274   77994 main.go:141] libmachine: (embed-certs-527415) DBG | </network>
	I0717 18:31:31.813283   77994 main.go:141] libmachine: (embed-certs-527415) DBG | 
	I0717 18:31:31.818167   77994 main.go:141] libmachine: (embed-certs-527415) DBG | trying to create private KVM network mk-embed-certs-527415 192.168.61.0/24...
	I0717 18:31:31.890335   77994 main.go:141] libmachine: (embed-certs-527415) DBG | private KVM network mk-embed-certs-527415 192.168.61.0/24 created
	I0717 18:31:31.890370   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:31.890312   78077 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:31:31.890384   77994 main.go:141] libmachine: (embed-certs-527415) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415 ...
	I0717 18:31:31.890400   77994 main.go:141] libmachine: (embed-certs-527415) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:31:31.890484   77994 main.go:141] libmachine: (embed-certs-527415) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:31:32.148557   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:32.148429   78077 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa...
	I0717 18:31:32.296820   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:32.296676   78077 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/embed-certs-527415.rawdisk...
	I0717 18:31:32.296882   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Writing magic tar header
	I0717 18:31:32.296902   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Writing SSH key tar header
	I0717 18:31:32.296916   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:32.296808   78077 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415 ...
	I0717 18:31:32.296932   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415
	I0717 18:31:32.296971   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 18:31:32.296993   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:31:32.297010   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415 (perms=drwx------)
	I0717 18:31:32.297030   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:31:32.297044   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 18:31:32.297057   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 18:31:32.297067   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 18:31:32.297080   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:31:32.297111   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:31:32.297145   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:31:32.297159   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Checking permissions on dir: /home
	I0717 18:31:32.297175   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Skipping /home - not owner
	I0717 18:31:32.297204   77994 main.go:141] libmachine: (embed-certs-527415) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:31:32.297220   77994 main.go:141] libmachine: (embed-certs-527415) Creating domain...
	I0717 18:31:32.298269   77994 main.go:141] libmachine: (embed-certs-527415) define libvirt domain using xml: 
	I0717 18:31:32.298285   77994 main.go:141] libmachine: (embed-certs-527415) <domain type='kvm'>
	I0717 18:31:32.298302   77994 main.go:141] libmachine: (embed-certs-527415)   <name>embed-certs-527415</name>
	I0717 18:31:32.298311   77994 main.go:141] libmachine: (embed-certs-527415)   <memory unit='MiB'>2200</memory>
	I0717 18:31:32.298321   77994 main.go:141] libmachine: (embed-certs-527415)   <vcpu>2</vcpu>
	I0717 18:31:32.298332   77994 main.go:141] libmachine: (embed-certs-527415)   <features>
	I0717 18:31:32.298344   77994 main.go:141] libmachine: (embed-certs-527415)     <acpi/>
	I0717 18:31:32.298355   77994 main.go:141] libmachine: (embed-certs-527415)     <apic/>
	I0717 18:31:32.298363   77994 main.go:141] libmachine: (embed-certs-527415)     <pae/>
	I0717 18:31:32.298376   77994 main.go:141] libmachine: (embed-certs-527415)     
	I0717 18:31:32.298420   77994 main.go:141] libmachine: (embed-certs-527415)   </features>
	I0717 18:31:32.298448   77994 main.go:141] libmachine: (embed-certs-527415)   <cpu mode='host-passthrough'>
	I0717 18:31:32.298462   77994 main.go:141] libmachine: (embed-certs-527415)   
	I0717 18:31:32.298474   77994 main.go:141] libmachine: (embed-certs-527415)   </cpu>
	I0717 18:31:32.298486   77994 main.go:141] libmachine: (embed-certs-527415)   <os>
	I0717 18:31:32.298498   77994 main.go:141] libmachine: (embed-certs-527415)     <type>hvm</type>
	I0717 18:31:32.298511   77994 main.go:141] libmachine: (embed-certs-527415)     <boot dev='cdrom'/>
	I0717 18:31:32.298524   77994 main.go:141] libmachine: (embed-certs-527415)     <boot dev='hd'/>
	I0717 18:31:32.298551   77994 main.go:141] libmachine: (embed-certs-527415)     <bootmenu enable='no'/>
	I0717 18:31:32.298576   77994 main.go:141] libmachine: (embed-certs-527415)   </os>
	I0717 18:31:32.298595   77994 main.go:141] libmachine: (embed-certs-527415)   <devices>
	I0717 18:31:32.298614   77994 main.go:141] libmachine: (embed-certs-527415)     <disk type='file' device='cdrom'>
	I0717 18:31:32.298633   77994 main.go:141] libmachine: (embed-certs-527415)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/boot2docker.iso'/>
	I0717 18:31:32.298646   77994 main.go:141] libmachine: (embed-certs-527415)       <target dev='hdc' bus='scsi'/>
	I0717 18:31:32.298660   77994 main.go:141] libmachine: (embed-certs-527415)       <readonly/>
	I0717 18:31:32.298688   77994 main.go:141] libmachine: (embed-certs-527415)     </disk>
	I0717 18:31:32.298709   77994 main.go:141] libmachine: (embed-certs-527415)     <disk type='file' device='disk'>
	I0717 18:31:32.298726   77994 main.go:141] libmachine: (embed-certs-527415)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:31:32.298741   77994 main.go:141] libmachine: (embed-certs-527415)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/embed-certs-527415.rawdisk'/>
	I0717 18:31:32.298754   77994 main.go:141] libmachine: (embed-certs-527415)       <target dev='hda' bus='virtio'/>
	I0717 18:31:32.298767   77994 main.go:141] libmachine: (embed-certs-527415)     </disk>
	I0717 18:31:32.298778   77994 main.go:141] libmachine: (embed-certs-527415)     <interface type='network'>
	I0717 18:31:32.298796   77994 main.go:141] libmachine: (embed-certs-527415)       <source network='mk-embed-certs-527415'/>
	I0717 18:31:32.298810   77994 main.go:141] libmachine: (embed-certs-527415)       <model type='virtio'/>
	I0717 18:31:32.298822   77994 main.go:141] libmachine: (embed-certs-527415)     </interface>
	I0717 18:31:32.298836   77994 main.go:141] libmachine: (embed-certs-527415)     <interface type='network'>
	I0717 18:31:32.298846   77994 main.go:141] libmachine: (embed-certs-527415)       <source network='default'/>
	I0717 18:31:32.298866   77994 main.go:141] libmachine: (embed-certs-527415)       <model type='virtio'/>
	I0717 18:31:32.298885   77994 main.go:141] libmachine: (embed-certs-527415)     </interface>
	I0717 18:31:32.298899   77994 main.go:141] libmachine: (embed-certs-527415)     <serial type='pty'>
	I0717 18:31:32.298911   77994 main.go:141] libmachine: (embed-certs-527415)       <target port='0'/>
	I0717 18:31:32.298924   77994 main.go:141] libmachine: (embed-certs-527415)     </serial>
	I0717 18:31:32.298941   77994 main.go:141] libmachine: (embed-certs-527415)     <console type='pty'>
	I0717 18:31:32.298972   77994 main.go:141] libmachine: (embed-certs-527415)       <target type='serial' port='0'/>
	I0717 18:31:32.298994   77994 main.go:141] libmachine: (embed-certs-527415)     </console>
	I0717 18:31:32.299007   77994 main.go:141] libmachine: (embed-certs-527415)     <rng model='virtio'>
	I0717 18:31:32.299019   77994 main.go:141] libmachine: (embed-certs-527415)       <backend model='random'>/dev/random</backend>
	I0717 18:31:32.299039   77994 main.go:141] libmachine: (embed-certs-527415)     </rng>
	I0717 18:31:32.299058   77994 main.go:141] libmachine: (embed-certs-527415)     
	I0717 18:31:32.299077   77994 main.go:141] libmachine: (embed-certs-527415)     
	I0717 18:31:32.299093   77994 main.go:141] libmachine: (embed-certs-527415)   </devices>
	I0717 18:31:32.299104   77994 main.go:141] libmachine: (embed-certs-527415) </domain>
	I0717 18:31:32.299113   77994 main.go:141] libmachine: (embed-certs-527415) 
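	The XML echoed above is the libvirt domain definition the kvm2 driver builds for the VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a bootable CD-ROM, the raw disk image as the primary virtio disk, and two virtio NICs, one on the private mk-embed-certs-527415 network and one on the default network. As a rough illustration of the same steps done by hand with standard virsh commands (the .xml file names below are hypothetical, not files the test creates):

		virsh net-define mk-embed-certs-527415.xml    # define the private network from its XML
		virsh net-start mk-embed-certs-527415
		virsh define embed-certs-527415.xml           # register the domain definition
		virsh start embed-certs-527415
		virsh domifaddr embed-certs-527415            # watch for a DHCP lease / IP address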
	I0717 18:31:32.303768   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:b7:0f:9b in network default
	I0717 18:31:32.304404   77994 main.go:141] libmachine: (embed-certs-527415) Ensuring networks are active...
	I0717 18:31:32.304423   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:32.305118   77994 main.go:141] libmachine: (embed-certs-527415) Ensuring network default is active
	I0717 18:31:32.305479   77994 main.go:141] libmachine: (embed-certs-527415) Ensuring network mk-embed-certs-527415 is active
	I0717 18:31:32.306020   77994 main.go:141] libmachine: (embed-certs-527415) Getting domain xml...
	I0717 18:31:32.306702   77994 main.go:141] libmachine: (embed-certs-527415) Creating domain...
	I0717 18:31:30.378080   76391 main.go:141] libmachine: (no-preload-066175) DBG | Getting to WaitForSSH function...
	I0717 18:31:30.381087   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.381517   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.381540   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.381651   76391 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH client type: external
	I0717 18:31:30.381676   76391 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa (-rw-------)
	I0717 18:31:30.381712   76391 main.go:141] libmachine: (no-preload-066175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:31:30.381731   76391 main.go:141] libmachine: (no-preload-066175) DBG | About to run SSH command:
	I0717 18:31:30.381752   76391 main.go:141] libmachine: (no-preload-066175) DBG | exit 0
	I0717 18:31:30.509436   76391 main.go:141] libmachine: (no-preload-066175) DBG | SSH cmd err, output: <nil>: 
	I0717 18:31:30.509704   76391 main.go:141] libmachine: (no-preload-066175) KVM machine creation complete!
	I0717 18:31:30.510079   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetConfigRaw
	I0717 18:31:30.510684   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:30.510894   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:30.511044   76391 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:31:30.511059   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:31:30.512486   76391 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:31:30.512510   76391 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:31:30.512518   76391 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:31:30.512526   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:30.514844   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.515158   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.515209   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.515304   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:30.515476   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.515626   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.515769   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:30.515948   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:30.516136   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:30.516146   76391 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:31:30.620056   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:31:30.620086   76391 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:31:30.620097   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:30.623128   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.623464   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.623492   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.623614   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:30.623804   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.623963   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.624081   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:30.624233   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:30.624441   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:30.624455   76391 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:31:30.725315   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:31:30.725443   76391 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:31:30.725460   76391 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:31:30.725471   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:31:30.725748   76391 buildroot.go:166] provisioning hostname "no-preload-066175"
	I0717 18:31:30.725779   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:31:30.725989   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:30.728433   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.728879   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.728912   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.729094   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:30.729263   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.729426   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.729560   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:30.729718   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:30.729980   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:30.729998   76391 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-066175 && echo "no-preload-066175" | sudo tee /etc/hostname
	I0717 18:31:30.846184   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-066175
	
	I0717 18:31:30.846217   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:30.849259   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.849550   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.849588   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.849752   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:30.849920   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.850083   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:30.850225   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:30.850401   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:30.850556   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:30.850573   76391 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-066175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-066175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-066175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:31:30.961590   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
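	The shell snippet sent over SSH above keeps /etc/hosts consistent with the new hostname: if a 127.0.1.1 entry already exists it is rewritten in place with sed, otherwise a new entry is appended. A minimal before/after sketch, assuming the guest starts with only a loopback entry:

		# before
		127.0.0.1  localhost
		# after the snippet runs (no 127.0.1.1 line existed, so one was appended)
		127.0.0.1  localhost
		127.0.1.1 no-preload-066175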
	I0717 18:31:30.961620   76391 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:31:30.961675   76391 buildroot.go:174] setting up certificates
	I0717 18:31:30.961690   76391 provision.go:84] configureAuth start
	I0717 18:31:30.961710   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:31:30.962027   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:31:30.964583   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.964991   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.965026   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.965165   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:30.967244   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.967701   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:30.967723   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:30.967915   76391 provision.go:143] copyHostCerts
	I0717 18:31:30.967989   76391 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:31:30.968001   76391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:31:30.968057   76391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:31:30.968147   76391 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:31:30.968155   76391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:31:30.968176   76391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:31:30.968238   76391 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:31:30.968245   76391 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:31:30.968261   76391 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:31:30.968317   76391 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.no-preload-066175 san=[127.0.0.1 192.168.72.216 localhost minikube no-preload-066175]
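	Here the provisioner issues a server certificate signed by the profile's local CA, with the SANs listed in the log line (loopback, the VM IP 192.168.72.216, localhost, minikube, and the machine name). A loose equivalent with plain openssl, purely as an illustrative sketch and not what minikube itself runs (file names and key size are assumptions; the SAN list is copied from the log):

		openssl req -new -newkey rsa:2048 -nodes \
		  -keyout server-key.pem -out server.csr \
		  -subj "/O=jenkins.no-preload-066175"
		openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
		  -out server.pem -days 365 \
		  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.72.216,DNS:localhost,DNS:minikube,DNS:no-preload-066175")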
	I0717 18:31:31.143419   76391 provision.go:177] copyRemoteCerts
	I0717 18:31:31.143473   76391 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:31:31.143495   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.146046   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.146368   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.146391   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.146657   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.146862   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.147028   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.147173   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:31:31.226668   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 18:31:31.248332   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:31:31.269415   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:31:31.290074   76391 provision.go:87] duration metric: took 328.36699ms to configureAuth
	I0717 18:31:31.290100   76391 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:31:31.290253   76391 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:31:31.290332   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.293271   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.293624   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.293655   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.293795   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.293946   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.294100   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.294210   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.294359   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:31.294536   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:31.294557   76391 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:31:31.553507   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:31:31.553536   76391 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:31:31.553546   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetURL
	I0717 18:31:31.554736   76391 main.go:141] libmachine: (no-preload-066175) DBG | Using libvirt version 6000000
	I0717 18:31:31.557056   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.557387   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.557417   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.557578   76391 main.go:141] libmachine: Docker is up and running!
	I0717 18:31:31.557594   76391 main.go:141] libmachine: Reticulating splines...
	I0717 18:31:31.557602   76391 client.go:171] duration metric: took 27.982696356s to LocalClient.Create
	I0717 18:31:31.557639   76391 start.go:167] duration metric: took 27.982768994s to libmachine.API.Create "no-preload-066175"
	I0717 18:31:31.557648   76391 start.go:293] postStartSetup for "no-preload-066175" (driver="kvm2")
	I0717 18:31:31.557663   76391 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:31:31.557686   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:31.557925   76391 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:31:31.557945   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.560136   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.560489   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.560518   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.560656   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.560870   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.561030   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.561147   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:31:31.642798   76391 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:31:31.646461   76391 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:31:31.646482   76391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:31:31.646552   76391 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:31:31.646641   76391 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:31:31.646748   76391 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:31:31.655092   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:31:31.676712   76391 start.go:296] duration metric: took 119.050486ms for postStartSetup
	I0717 18:31:31.676757   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetConfigRaw
	I0717 18:31:31.677369   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:31:31.679689   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.679993   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.680022   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.680278   76391 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/config.json ...
	I0717 18:31:31.680472   76391 start.go:128] duration metric: took 28.126495252s to createHost
	I0717 18:31:31.680495   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.682709   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.683016   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.683037   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.683146   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.683412   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.683625   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.683827   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.684040   76391 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:31.684202   76391 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:31:31.684214   76391 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:31:31.785298   76391 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241091.773532814
	
	I0717 18:31:31.785315   76391 fix.go:216] guest clock: 1721241091.773532814
	I0717 18:31:31.785322   76391 fix.go:229] Guest: 2024-07-17 18:31:31.773532814 +0000 UTC Remote: 2024-07-17 18:31:31.680483267 +0000 UTC m=+37.507086707 (delta=93.049547ms)
	I0717 18:31:31.785340   76391 fix.go:200] guest clock delta is within tolerance: 93.049547ms
	I0717 18:31:31.785345   76391 start.go:83] releasing machines lock for "no-preload-066175", held for 28.23152162s
	I0717 18:31:31.785377   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:31.785674   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:31:31.788670   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.789059   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.789085   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.789279   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:31.789779   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:31.789980   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:31:31.790065   76391 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:31:31.790112   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.790315   76391 ssh_runner.go:195] Run: cat /version.json
	I0717 18:31:31.790344   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:31:31.792870   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.793115   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.793325   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.793352   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.793470   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.793591   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:31.793613   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:31.793647   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.793773   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.793818   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:31:31.793939   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:31:31.794015   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:31:31.794152   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:31:31.794306   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:31:31.869672   76391 ssh_runner.go:195] Run: systemctl --version
	I0717 18:31:31.905274   76391 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:31:32.072627   76391 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:31:32.078233   76391 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:31:32.078301   76391 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:31:32.092822   76391 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:31:32.092847   76391 start.go:495] detecting cgroup driver to use...
	I0717 18:31:32.092911   76391 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:31:32.108053   76391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:31:32.122303   76391 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:31:32.122369   76391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:31:32.136321   76391 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:31:32.150254   76391 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:31:32.273363   76391 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:31:32.422162   76391 docker.go:233] disabling docker service ...
	I0717 18:31:32.422221   76391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:31:32.436118   76391 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:31:32.448832   76391 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:31:32.585000   76391 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:31:32.708483   76391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:31:32.724100   76391 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:31:32.740515   76391 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 18:31:32.740590   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.753527   76391 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:31:32.753586   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.765797   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.775331   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.785046   76391 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:31:32.794885   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.804604   76391 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.820620   76391 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:32.830014   76391 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:31:32.839851   76391 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:31:32.839893   76391 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:31:32.853080   76391 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:31:32.862938   76391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:31:32.995893   76391 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:31:33.137303   76391 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:31:33.137370   76391 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:31:33.142293   76391 start.go:563] Will wait 60s for crictl version
	I0717 18:31:33.142339   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.145670   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:31:33.181362   76391 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:31:33.181435   76391 ssh_runner.go:195] Run: crio --version
	I0717 18:31:33.209245   76391 ssh_runner.go:195] Run: crio --version
	I0717 18:31:33.237648   76391 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 18:31:33.238943   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:31:33.242151   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:33.242633   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:31:33.242669   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:31:33.242985   76391 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 18:31:33.246924   76391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:31:33.259634   76391 kubeadm.go:883] updating cluster {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:31:33.259733   76391 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:31:33.259769   76391 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:31:33.293987   76391 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 18:31:33.294011   76391 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:31:33.294070   76391 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:33.294089   76391 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:31:33.294150   76391 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 18:31:33.294171   76391 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:31:33.294097   76391 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:31:33.294070   76391 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:31:33.294070   76391 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:31:33.294096   76391 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:31:33.295633   76391 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:31:33.295687   76391 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:33.295692   76391 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:31:33.295695   76391 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:31:33.295635   76391 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 18:31:33.295644   76391 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:31:33.295695   76391 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:31:33.295633   76391 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:31:33.477115   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:31:33.512387   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 18:31:33.515338   76391 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 18:31:33.515385   76391 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:31:33.515429   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.516497   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:31:33.526652   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:31:33.531476   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:31:33.544357   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 18:31:33.574814   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:31:33.578483   76391 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 18:31:33.578531   76391 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:31:33.578540   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:31:33.578585   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.638901   76391 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 18:31:33.638946   76391 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:31:33.638997   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.658595   76391 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 18:31:33.658643   76391 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:31:33.658694   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.683215   76391 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 18:31:33.683261   76391 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:31:33.683313   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.683221   76391 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0717 18:31:33.683378   76391 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0717 18:31:33.683429   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.695172   76391 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 18:31:33.695216   76391 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:31:33.695262   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:33.696647   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 18:31:33.696735   76391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:31:33.696737   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 18:31:33.696782   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:31:33.696783   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:31:33.700105   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:31:33.700117   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0717 18:31:33.701036   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:31:33.716716   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0': No such file or directory
	I0717 18:31:33.716754   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (27889152 bytes)
	I0717 18:31:33.851087   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 18:31:33.851202   76391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:31:33.860422   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 18:31:33.860472   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 18:31:33.860526   76391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:31:33.860568   76391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:31:33.860629   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0717 18:31:33.860623   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 18:31:33.860672   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 18:31:33.860689   76391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0717 18:31:33.860719   76391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:31:33.860729   76391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:31:33.894844   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.14-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.14-0': No such file or directory
	I0717 18:31:33.894894   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 --> /var/lib/minikube/images/etcd_3.5.14-0 (56932864 bytes)
	I0717 18:31:33.898993   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0717 18:31:33.899031   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0717 18:31:33.899033   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0': No such file or directory
	I0717 18:31:33.899037   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0717 18:31:33.899088   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (18189312 bytes)
	I0717 18:31:33.899064   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (20081152 bytes)
	I0717 18:31:33.899152   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0': No such file or directory
	I0717 18:31:33.899176   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (30186496 bytes)
	I0717 18:31:33.899188   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0': No such file or directory
	I0717 18:31:33.899214   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (26149888 bytes)
	I0717 18:31:34.005437   76391 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I0717 18:31:34.005498   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I0717 18:31:34.101340   76391 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:31.955496   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:34.452829   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:33.589246   77994 main.go:141] libmachine: (embed-certs-527415) Waiting to get IP...
	I0717 18:31:33.590252   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:33.590812   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:33.590839   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:33.590791   78077 retry.go:31] will retry after 212.1232ms: waiting for machine to come up
	I0717 18:31:33.804446   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:33.805108   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:33.805141   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:33.805038   78077 retry.go:31] will retry after 329.640925ms: waiting for machine to come up
	I0717 18:31:34.136730   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:34.137459   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:34.137485   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:34.137398   78077 retry.go:31] will retry after 474.208397ms: waiting for machine to come up
	I0717 18:31:34.613070   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:34.613555   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:34.613589   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:34.613507   78077 retry.go:31] will retry after 480.946138ms: waiting for machine to come up
	I0717 18:31:35.096126   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:35.096758   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:35.096787   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:35.096706   78077 retry.go:31] will retry after 619.792149ms: waiting for machine to come up
	I0717 18:31:35.718511   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:35.719154   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:35.719183   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:35.719105   78077 retry.go:31] will retry after 617.83695ms: waiting for machine to come up
	I0717 18:31:36.339089   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:36.339551   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:36.339577   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:36.339504   78077 retry.go:31] will retry after 1.119290876s: waiting for machine to come up
	I0717 18:31:37.460583   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:37.461228   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:37.461256   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:37.461178   78077 retry.go:31] will retry after 1.078022658s: waiting for machine to come up
	I0717 18:31:34.764584   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0717 18:31:34.764627   76391 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:31:34.764677   76391 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 18:31:34.764723   76391 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:34.764767   76391 ssh_runner.go:195] Run: which crictl
	I0717 18:31:34.764684   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:31:37.440119   76391 ssh_runner.go:235] Completed: which crictl: (2.675324301s)
	I0717 18:31:37.440199   76391 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:37.440212   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.675403717s)
	I0717 18:31:37.440234   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 18:31:37.440263   76391 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:31:37.440332   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:31:36.454130   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:38.454403   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:38.540880   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:38.541390   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:38.541413   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:38.541299   78077 retry.go:31] will retry after 1.425823371s: waiting for machine to come up
	I0717 18:31:39.968956   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:39.969608   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:39.969654   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:39.969555   78077 retry.go:31] will retry after 2.03401538s: waiting for machine to come up
	I0717 18:31:42.005548   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:42.006145   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:42.006186   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:42.006097   78077 retry.go:31] will retry after 2.798937612s: waiting for machine to come up
	I0717 18:31:39.409448   76391 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.969219545s)
	I0717 18:31:39.409478   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.96912201s)
	I0717 18:31:39.409502   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 18:31:39.409529   76391 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:31:39.409583   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:31:39.409503   76391 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 18:31:39.409686   76391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:31:41.372476   76391 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.962762593s)
	I0717 18:31:41.372520   76391 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0717 18:31:41.372535   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.962924114s)
	I0717 18:31:41.372549   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 18:31:41.372548   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0717 18:31:41.372584   76391 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:31:41.372659   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:31:43.269851   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.89716244s)
	I0717 18:31:43.269883   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 18:31:43.269910   76391 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:31:43.269986   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:31:40.955183   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:43.451812   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:45.452884   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:44.808105   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:44.808594   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:44.808616   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:44.808574   78077 retry.go:31] will retry after 2.417317368s: waiting for machine to come up
	I0717 18:31:47.227937   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:47.228407   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:31:47.228427   77994 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:31:47.228378   78077 retry.go:31] will retry after 4.217313619s: waiting for machine to come up
	I0717 18:31:45.241544   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.971531191s)
	I0717 18:31:45.241572   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 18:31:45.241608   76391 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:31:45.241673   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:31:48.409933   76391 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.168231143s)
	I0717 18:31:48.409964   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 18:31:48.410000   76391 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:31:48.410071   76391 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:31:49.066543   76391 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 18:31:49.066589   76391 cache_images.go:123] Successfully loaded all cached images
	I0717 18:31:49.066601   76391 cache_images.go:92] duration metric: took 15.772574999s to LoadCachedImages
	I0717 18:31:49.066615   76391 kubeadm.go:934] updating node { 192.168.72.216 8443 v1.31.0-beta.0 crio true true} ...
	I0717 18:31:49.066740   76391 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-066175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:31:49.066801   76391 ssh_runner.go:195] Run: crio config
	I0717 18:31:49.114337   76391 cni.go:84] Creating CNI manager for ""
	I0717 18:31:49.114361   76391 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:31:49.114374   76391 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:31:49.114409   76391 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.216 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-066175 NodeName:no-preload-066175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:31:49.114568   76391 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-066175"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:31:49.114642   76391 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 18:31:49.124651   76391 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0-beta.0': No such file or directory
	
	Initiating transfer...
	I0717 18:31:49.124706   76391 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 18:31:49.133972   76391 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256
	I0717 18:31:49.134057   76391 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubelet
	I0717 18:31:49.134101   76391 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubeadm
	I0717 18:31:49.134065   76391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl
	I0717 18:31:49.138829   76391 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl': No such file or directory
	I0717 18:31:49.138853   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl (56209560 bytes)
	I0717 18:31:47.951981   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:49.953069   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:51.450034   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.450725   77994 main.go:141] libmachine: (embed-certs-527415) Found IP for machine: 192.168.61.90
	I0717 18:31:51.450755   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has current primary IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.450761   77994 main.go:141] libmachine: (embed-certs-527415) Reserving static IP address...
	I0717 18:31:51.451197   77994 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"} in network mk-embed-certs-527415
	I0717 18:31:51.523934   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Getting to WaitForSSH function...
	I0717 18:31:51.523969   77994 main.go:141] libmachine: (embed-certs-527415) Reserved static IP address: 192.168.61.90
	I0717 18:31:51.524009   77994 main.go:141] libmachine: (embed-certs-527415) Waiting for SSH to be available...
	I0717 18:31:51.526885   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.527351   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:minikube Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:51.527381   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.527540   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH client type: external
	I0717 18:31:51.527564   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa (-rw-------)
	I0717 18:31:51.527598   77994 main.go:141] libmachine: (embed-certs-527415) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:31:51.527612   77994 main.go:141] libmachine: (embed-certs-527415) DBG | About to run SSH command:
	I0717 18:31:51.527625   77994 main.go:141] libmachine: (embed-certs-527415) DBG | exit 0
	I0717 18:31:51.656746   77994 main.go:141] libmachine: (embed-certs-527415) DBG | SSH cmd err, output: <nil>: 
	I0717 18:31:51.657034   77994 main.go:141] libmachine: (embed-certs-527415) KVM machine creation complete!
	I0717 18:31:51.657367   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:31:51.657882   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:51.658124   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:51.658283   77994 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:31:51.658300   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:31:51.659706   77994 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:31:51.659722   77994 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:31:51.659729   77994 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:31:51.659738   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:51.661978   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.662282   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:51.662309   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.662414   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:51.662596   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.662734   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.662877   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:51.663040   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:51.663259   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:51.663270   77994 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:31:51.775852   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:31:51.775881   77994 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:31:51.775892   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:51.778538   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.778987   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:51.779011   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.779222   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:51.779428   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.779657   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.779808   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:51.779974   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:51.780153   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:51.780166   77994 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:31:51.889084   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:31:51.889175   77994 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:31:51.889191   77994 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:31:51.889201   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:31:51.889456   77994 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:31:51.889478   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:31:51.889696   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:51.892515   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.892901   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:51.892927   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:51.893105   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:51.893297   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.893473   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:51.893595   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:51.893738   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:51.893915   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:51.893931   77994 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-527415 && echo "embed-certs-527415" | sudo tee /etc/hostname
	I0717 18:31:52.019955   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-527415
	
	I0717 18:31:52.019982   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.023120   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.023422   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.023448   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.023633   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.023934   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.024106   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.024247   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.024397   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:52.024570   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:52.024592   77994 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-527415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-527415/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-527415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:31:52.141225   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:31:52.141255   77994 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:31:52.141306   77994 buildroot.go:174] setting up certificates
	I0717 18:31:52.141330   77994 provision.go:84] configureAuth start
	I0717 18:31:52.141347   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:31:52.141628   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:31:52.144442   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.144763   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.144791   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.144935   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.147182   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.147550   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.147589   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.147748   77994 provision.go:143] copyHostCerts
	I0717 18:31:52.147806   77994 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:31:52.147817   77994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:31:52.147866   77994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:31:52.147955   77994 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:31:52.147963   77994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:31:52.147984   77994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:31:52.148057   77994 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:31:52.148064   77994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:31:52.148086   77994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:31:52.148141   77994 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.embed-certs-527415 san=[127.0.0.1 192.168.61.90 embed-certs-527415 localhost minikube]
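The server certificate above is issued with SANs covering 127.0.0.1, the guest IP, the hostname, localhost and minikube. A minimal crypto/x509 sketch producing a certificate with the same SANs; it is self-signed for brevity, whereas minikube signs against its own CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-527415"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration seen later in the log
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-527415", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.90")},
	}
	// Self-signed: template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}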
	I0717 18:31:52.252587   77994 provision.go:177] copyRemoteCerts
	I0717 18:31:52.252660   77994 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:31:52.252689   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.255106   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.255484   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.255518   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.255761   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.255952   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.256129   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.256298   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:31:52.342533   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:31:52.367027   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:31:52.390985   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 18:31:52.412089   77994 provision.go:87] duration metric: took 270.743656ms to configureAuth
	I0717 18:31:52.412129   77994 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:31:52.412308   77994 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:31:52.412412   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.415290   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.415645   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.415671   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.415836   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.416018   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.416176   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.416294   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.416496   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:52.416689   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:52.416707   77994 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:31:50.157551   76391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:31:50.172158   76391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0-beta.0/kubelet
	I0717 18:31:50.176457   76391 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet': No such file or directory
	I0717 18:31:50.176496   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.31.0-beta.0/kubelet (76643576 bytes)
	I0717 18:31:53.717739   76391 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm
	I0717 18:31:53.722817   76391 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm': No such file or directory
	I0717 18:31:53.722860   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm (58110104 bytes)
	I0717 18:31:53.964050   76391 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:31:53.975154   76391 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 18:31:53.992873   76391 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 18:31:54.015018   76391 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 18:31:54.035446   76391 ssh_runner.go:195] Run: grep 192.168.72.216	control-plane.minikube.internal$ /etc/hosts
	I0717 18:31:54.039709   76391 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:31:54.052721   76391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:31:54.167697   76391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:31:54.183483   76391 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175 for IP: 192.168.72.216
	I0717 18:31:54.183504   76391 certs.go:194] generating shared ca certs ...
	I0717 18:31:54.183519   76391 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.183653   76391 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:31:54.183717   76391 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:31:54.183731   76391 certs.go:256] generating profile certs ...
	I0717 18:31:54.183795   76391 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key
	I0717 18:31:54.183811   76391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.crt with IP's: []
	I0717 18:31:52.673263   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:31:52.673302   77994 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:31:52.673314   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetURL
	I0717 18:31:52.674791   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Using libvirt version 6000000
	I0717 18:31:52.677282   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.677737   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.677764   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.677878   77994 main.go:141] libmachine: Docker is up and running!
	I0717 18:31:52.677899   77994 main.go:141] libmachine: Reticulating splines...
	I0717 18:31:52.677908   77994 client.go:171] duration metric: took 20.870538459s to LocalClient.Create
	I0717 18:31:52.677943   77994 start.go:167] duration metric: took 20.870616s to libmachine.API.Create "embed-certs-527415"
	I0717 18:31:52.677956   77994 start.go:293] postStartSetup for "embed-certs-527415" (driver="kvm2")
	I0717 18:31:52.677974   77994 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:31:52.677991   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:52.678242   77994 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:31:52.678266   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.680248   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.680563   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.680597   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.680714   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.680879   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.681101   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.681232   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:31:52.766289   77994 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:31:52.770069   77994 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:31:52.770086   77994 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:31:52.770146   77994 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:31:52.770223   77994 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:31:52.770321   77994 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:31:52.779112   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:31:52.801280   77994 start.go:296] duration metric: took 123.306555ms for postStartSetup
	I0717 18:31:52.801328   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:31:52.801941   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:31:52.804815   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.805160   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.805188   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.805412   77994 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json ...
	I0717 18:31:52.805589   77994 start.go:128] duration metric: took 21.019966577s to createHost
	I0717 18:31:52.805616   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.807940   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.808405   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.808432   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.808545   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.808721   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.808882   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.809047   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.809195   77994 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:52.809362   77994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:31:52.809375   77994 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:31:52.921449   77994 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241112.893868317
	
	I0717 18:31:52.921468   77994 fix.go:216] guest clock: 1721241112.893868317
	I0717 18:31:52.921474   77994 fix.go:229] Guest: 2024-07-17 18:31:52.893868317 +0000 UTC Remote: 2024-07-17 18:31:52.805601992 +0000 UTC m=+30.199766249 (delta=88.266325ms)
	I0717 18:31:52.921494   77994 fix.go:200] guest clock delta is within tolerance: 88.266325ms
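The fix.go lines above compare the guest clock against the host clock and accept the result when the delta is inside a tolerance. A small illustrative sketch of that comparison (the 2s tolerance here is an assumption for the example, not minikube's actual value):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is within the given tolerance
// of the host clock, mirroring the guest-clock check logged above.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(88 * time.Millisecond) // roughly the delta seen in the log
	if d, ok := clockDeltaOK(guest, host, 2*time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	}
}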
	I0717 18:31:52.921499   77994 start.go:83] releasing machines lock for "embed-certs-527415", held for 21.136037487s
	I0717 18:31:52.921517   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:52.921781   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:31:52.925132   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.925493   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.925519   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.925686   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:52.926244   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:52.926419   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:31:52.926533   77994 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:31:52.926579   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.926656   77994 ssh_runner.go:195] Run: cat /version.json
	I0717 18:31:52.926681   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:31:52.929807   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.929970   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.930168   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.930193   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.930365   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.930444   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:52.930471   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:52.930528   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.930685   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:31:52.930709   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.930840   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:31:52.930843   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:31:52.931018   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:31:52.931154   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:31:53.018875   77994 ssh_runner.go:195] Run: systemctl --version
	I0717 18:31:53.073618   77994 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:31:53.233683   77994 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:31:53.239402   77994 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:31:53.239458   77994 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:31:53.254745   77994 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:31:53.254768   77994 start.go:495] detecting cgroup driver to use...
	I0717 18:31:53.254852   77994 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:31:53.272129   77994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:31:53.284751   77994 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:31:53.284817   77994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:31:53.297287   77994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:31:53.310096   77994 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:31:53.418973   77994 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:31:53.569347   77994 docker.go:233] disabling docker service ...
	I0717 18:31:53.569424   77994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:31:53.584075   77994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:31:53.597553   77994 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:31:53.731390   77994 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:31:53.876960   77994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:31:53.895684   77994 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:31:53.921498   77994 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:31:53.921594   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:53.936665   77994 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:31:53.936739   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:53.949134   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:53.963753   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:53.975742   77994 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:31:53.987864   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:53.999149   77994 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:54.015311   77994 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
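The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch cgroup_manager to cgroupfs and open unprivileged ports. A sketch of one such in-place rewrite expressed in Go (in-memory input for illustration; minikube edits the file on the guest with sed):

package main

import (
	"fmt"
	"regexp"
)

// setCgroupManager rewrites the cgroup_manager line in a CRI-O drop-in,
// mirroring the sed edit in the log above.
func setCgroupManager(conf []byte, manager string) []byte {
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("cgroup_manager = %q", manager)))
}

func main() {
	conf := []byte("[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n")
	fmt.Printf("%s", setCgroupManager(conf, "cgroupfs"))
}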
	I0717 18:31:54.026099   77994 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:31:54.038188   77994 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:31:54.038239   77994 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:31:54.051132   77994 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:31:54.060875   77994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:31:54.178755   77994 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:31:54.580916   77994 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:31:54.581013   77994 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:31:54.585301   77994 start.go:563] Will wait 60s for crictl version
	I0717 18:31:54.585380   77994 ssh_runner.go:195] Run: which crictl
	I0717 18:31:54.588602   77994 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:31:54.625278   77994 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:31:54.625383   77994 ssh_runner.go:195] Run: crio --version
	I0717 18:31:54.660653   77994 ssh_runner.go:195] Run: crio --version
	I0717 18:31:54.696465   77994 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:31:54.268690   76391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.crt ...
	I0717 18:31:54.268717   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.crt: {Name:mkfc9a3fc73901f167d875c68badb009bba3473b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.268871   76391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key ...
	I0717 18:31:54.268881   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key: {Name:mka80e83b4f4aa4e9c199cede9b7f4aabb9280fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.268980   76391 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672
	I0717 18:31:54.268996   76391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt.78182672 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.216]
	I0717 18:31:54.434876   76391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt.78182672 ...
	I0717 18:31:54.434912   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt.78182672: {Name:mkc2c17201e99e2c605fdbca03d523d337a6eca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.435102   76391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672 ...
	I0717 18:31:54.435121   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672: {Name:mka7c3ef9777ecc269f3e41d6f06196449dd9e81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.435229   76391 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt.78182672 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt
	I0717 18:31:54.435328   76391 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key
	I0717 18:31:54.435385   76391 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key
	I0717 18:31:54.435401   76391 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt with IP's: []
	I0717 18:31:54.616605   76391 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt ...
	I0717 18:31:54.616631   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt: {Name:mkaf0bc2dc76758834e2d1fce1784f41f5568c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.616791   76391 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key ...
	I0717 18:31:54.616806   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key: {Name:mkee57f65eb7326dd47875723dc35812e3877809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:54.616991   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:31:54.617023   76391 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:31:54.617030   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:31:54.617051   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:31:54.617073   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:31:54.617101   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:31:54.617144   76391 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:31:54.617791   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:31:54.648238   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:31:54.676253   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:31:54.702785   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:31:54.725238   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:31:54.748069   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:31:54.777237   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:31:54.800606   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:31:54.824913   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:31:54.847780   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:31:54.873257   76391 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:31:54.907359   76391 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:31:54.932656   76391 ssh_runner.go:195] Run: openssl version
	I0717 18:31:54.940667   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:31:54.955926   76391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:31:54.960974   76391 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:31:54.961033   76391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:31:54.968406   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:31:54.982484   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:31:54.996890   76391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:31:55.004745   76391 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:31:55.004813   76391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:31:55.012014   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:31:55.025057   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:31:55.038976   76391 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:55.045874   76391 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:55.045938   76391 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:55.053668   76391 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:31:55.068421   76391 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:31:55.072888   76391 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:31:55.072960   76391 kubeadm.go:392] StartCluster: {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:31:55.073055   76391 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:31:55.073111   76391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:31:55.123582   76391 cri.go:89] found id: ""
	I0717 18:31:55.123695   76391 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:31:55.138646   76391 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:31:55.151104   76391 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:31:55.162351   76391 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:31:55.162375   76391 kubeadm.go:157] found existing configuration files:
	
	I0717 18:31:55.162428   76391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:31:55.173765   76391 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:31:55.173827   76391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:31:55.189405   76391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:31:55.204438   76391 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:31:55.204513   76391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:31:55.216112   76391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:31:55.229982   76391 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:31:55.230033   76391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:31:55.243597   76391 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:31:55.256553   76391 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:31:55.256625   76391 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:31:55.269573   76391 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:31:55.331158   76391 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 18:31:55.331556   76391 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:31:55.445321   76391 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:31:55.445462   76391 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:31:55.445606   76391 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 18:31:55.468599   76391 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:31:52.454284   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:54.954746   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:54.697918   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:31:54.700782   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:54.701202   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:31:54.701231   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:31:54.701409   77994 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 18:31:54.705863   77994 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:31:54.718108   77994 kubeadm.go:883] updating cluster {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:31:54.718282   77994 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:31:54.718362   77994 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:31:54.751153   77994 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:31:54.751227   77994 ssh_runner.go:195] Run: which lz4
	I0717 18:31:54.756244   77994 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:31:54.761463   77994 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:31:54.761488   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:31:56.014749   77994 crio.go:462] duration metric: took 1.258525232s to copy over tarball
	I0717 18:31:56.014875   77994 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:31:55.470772   76391 out.go:204]   - Generating certificates and keys ...
	I0717 18:31:55.470883   76391 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:31:55.470985   76391 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:31:55.590001   76391 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:31:55.820801   76391 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:31:55.938963   76391 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:31:56.112630   76391 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 18:31:56.239675   76391 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 18:31:56.239814   76391 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-066175] and IPs [192.168.72.216 127.0.0.1 ::1]
	I0717 18:31:56.375120   76391 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 18:31:56.375506   76391 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-066175] and IPs [192.168.72.216 127.0.0.1 ::1]
	I0717 18:31:56.600019   76391 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:31:56.718280   76391 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:31:56.913309   76391 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 18:31:56.913402   76391 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:31:57.020178   76391 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:31:57.131272   76391 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:31:57.736863   76391 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:31:57.958126   76391 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:31:58.047292   76391 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:31:58.048051   76391 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:31:58.051183   76391 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:31:58.053328   76391 out.go:204]   - Booting up control plane ...
	I0717 18:31:58.053461   76391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:31:58.053565   76391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:31:58.053672   76391 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:31:58.075519   76391 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:31:58.084553   76391 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:31:58.084634   76391 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:31:58.235800   76391 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:31:58.235921   76391 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:31:58.741075   76391 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.409445ms
	I0717 18:31:58.741227   76391 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:31:58.120843   77994 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.105923139s)
	I0717 18:31:58.120866   77994 crio.go:469] duration metric: took 2.106083712s to extract the tarball
	I0717 18:31:58.120873   77994 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:31:58.156367   77994 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:31:58.200921   77994 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:31:58.200955   77994 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:31:58.200965   77994 kubeadm.go:934] updating node { 192.168.61.90 8443 v1.30.2 crio true true} ...
	I0717 18:31:58.201090   77994 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-527415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:31:58.201163   77994 ssh_runner.go:195] Run: crio config
	I0717 18:31:58.252221   77994 cni.go:84] Creating CNI manager for ""
	I0717 18:31:58.252243   77994 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:31:58.252258   77994 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:31:58.252277   77994 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.90 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-527415 NodeName:embed-certs-527415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:31:58.252415   77994 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-527415"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:31:58.252475   77994 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:31:58.264998   77994 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:31:58.265066   77994 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:31:58.275284   77994 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0717 18:31:58.292501   77994 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:31:58.308586   77994 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0717 18:31:58.324035   77994 ssh_runner.go:195] Run: grep 192.168.61.90	control-plane.minikube.internal$ /etc/hosts
	I0717 18:31:58.327675   77994 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:31:58.340285   77994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:31:58.455213   77994 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:31:58.471042   77994 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415 for IP: 192.168.61.90
	I0717 18:31:58.471067   77994 certs.go:194] generating shared ca certs ...
	I0717 18:31:58.471097   77994 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.471320   77994 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:31:58.471399   77994 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:31:58.471415   77994 certs.go:256] generating profile certs ...
	I0717 18:31:58.471508   77994 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key
	I0717 18:31:58.471529   77994 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.crt with IP's: []
	I0717 18:31:58.693854   77994 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.crt ...
	I0717 18:31:58.693888   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.crt: {Name:mka8c970e93bdd8111ff40dffa7f77a2c03e5f9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.694083   77994 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key ...
	I0717 18:31:58.694097   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key: {Name:mk4459e338073cbe85f92b5e828eb8dad95c724a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.694196   77994 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9
	I0717 18:31:58.694211   77994 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt.f26848e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.90]
	I0717 18:31:58.773256   77994 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt.f26848e9 ...
	I0717 18:31:58.773282   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt.f26848e9: {Name:mkdd3636f13c8ab881f83fc1d3b87dc73c54b436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.773453   77994 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9 ...
	I0717 18:31:58.773469   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9: {Name:mk452c939818aa8ab2959db3b8f6f150d79a61c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.773562   77994 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt.f26848e9 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt
	I0717 18:31:58.773652   77994 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key
	I0717 18:31:58.773708   77994 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key
	I0717 18:31:58.773722   77994 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt with IP's: []
	I0717 18:31:58.991104   77994 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt ...
	I0717 18:31:58.991132   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt: {Name:mk0cd91bc7679c284d1182d4f6ff5007e1d42583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.991292   77994 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key ...
	I0717 18:31:58.991304   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key: {Name:mk71c78a469bc4e8a4c94b29ca757ac1bc46349d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:58.991457   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:31:58.991495   77994 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:31:58.991504   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:31:58.991526   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:31:58.991546   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:31:58.991566   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:31:58.991606   77994 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:31:58.992203   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:31:59.020109   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:31:59.045102   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:31:59.066401   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:31:59.088628   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 18:31:59.111918   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:31:59.133766   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:31:59.157153   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:31:59.186329   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:31:59.208929   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:31:59.242074   77994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:31:59.277509   77994 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:31:59.298944   77994 ssh_runner.go:195] Run: openssl version
	I0717 18:31:59.305473   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:31:59.318247   77994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:59.325663   77994 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:59.325758   77994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:59.333143   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:31:59.347546   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:31:59.361626   77994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:31:59.366207   77994 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:31:59.366272   77994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:31:59.371771   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:31:59.382330   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:31:59.393255   77994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:31:59.400958   77994 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:31:59.401022   77994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:31:59.408425   77994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:31:59.422321   77994 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:31:59.426531   77994 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:31:59.426588   77994 kubeadm.go:392] StartCluster: {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:31:59.426707   77994 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:31:59.426777   77994 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:31:59.464919   77994 cri.go:89] found id: ""
	I0717 18:31:59.465008   77994 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:31:59.474303   77994 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:31:59.483286   77994 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:31:59.492360   77994 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:31:59.492382   77994 kubeadm.go:157] found existing configuration files:
	
	I0717 18:31:59.492433   77994 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:31:59.503928   77994 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:31:59.504000   77994 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:31:59.513822   77994 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:31:59.523256   77994 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:31:59.523322   77994 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:31:59.531799   77994 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:31:59.548122   77994 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:31:59.548180   77994 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:31:59.563272   77994 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:31:59.572332   77994 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:31:59.572394   77994 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:31:59.583016   77994 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:31:59.701044   77994 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:31:59.701101   77994 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:31:59.834726   77994 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:31:59.834877   77994 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:31:59.835005   77994 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:32:00.030478   77994 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:31:57.453157   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:31:59.454636   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:00.286575   77994 out.go:204]   - Generating certificates and keys ...
	I0717 18:32:00.286711   77994 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:32:00.286805   77994 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:32:00.286902   77994 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:32:00.397498   77994 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:32:00.830524   77994 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:32:01.000442   77994 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 18:32:01.064799   77994 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 18:32:01.065081   77994 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-527415 localhost] and IPs [192.168.61.90 127.0.0.1 ::1]
	I0717 18:32:01.322578   77994 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 18:32:01.322847   77994 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-527415 localhost] and IPs [192.168.61.90 127.0.0.1 ::1]
	I0717 18:32:01.554100   77994 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:32:01.689208   77994 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:32:02.015293   77994 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 18:32:02.015525   77994 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:32:02.124199   77994 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:32:02.176757   77994 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:32:02.573586   77994 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:32:02.897023   77994 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:32:03.051541   77994 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:32:03.052453   77994 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:32:03.055262   77994 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:32:05.743834   76391 kubeadm.go:310] [api-check] The API server is healthy after 7.002303996s
	I0717 18:32:05.760530   76391 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:32:05.778549   76391 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:32:05.817434   76391 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:32:05.817724   76391 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-066175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:32:05.831095   76391 kubeadm.go:310] [bootstrap-token] Using token: 2lj338.n7y99vmpdx4rwfva
	I0717 18:32:01.952471   64770 pod_ready.go:102] pod "kube-proxy-8jf5p" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:02.946617   64770 pod_ready.go:81] duration metric: took 4m0.000703328s for pod "kube-proxy-8jf5p" in "kube-system" namespace to be "Ready" ...
	E0717 18:32:02.946667   64770 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "kube-proxy-8jf5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:32:02.946688   64770 pod_ready.go:38] duration metric: took 4m13.537210596s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:02.946713   64770 kubeadm.go:597] duration metric: took 4m41.544315272s to restartPrimaryControlPlane
	W0717 18:32:02.946772   64770 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:32:02.946807   64770 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:32:05.832316   76391 out.go:204]   - Configuring RBAC rules ...
	I0717 18:32:05.832468   76391 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:32:05.839739   76391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:32:05.848276   76391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:32:05.852243   76391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:32:05.859383   76391 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:32:05.863387   76391 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:32:06.157376   76391 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:32:07.408059   76391 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:32:07.461385   76391 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:32:07.462841   76391 kubeadm.go:310] 
	I0717 18:32:07.462935   76391 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:32:07.462947   76391 kubeadm.go:310] 
	I0717 18:32:07.463042   76391 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:32:07.463055   76391 kubeadm.go:310] 
	I0717 18:32:07.463082   76391 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:32:07.463150   76391 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:32:07.463218   76391 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:32:07.463233   76391 kubeadm.go:310] 
	I0717 18:32:07.463301   76391 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:32:07.463309   76391 kubeadm.go:310] 
	I0717 18:32:07.463370   76391 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:32:07.463380   76391 kubeadm.go:310] 
	I0717 18:32:07.463454   76391 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:32:07.463554   76391 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:32:07.463650   76391 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:32:07.463659   76391 kubeadm.go:310] 
	I0717 18:32:07.463761   76391 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:32:07.463857   76391 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:32:07.463867   76391 kubeadm.go:310] 
	I0717 18:32:07.463974   76391 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2lj338.n7y99vmpdx4rwfva \
	I0717 18:32:07.464106   76391 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:32:07.464136   76391 kubeadm.go:310] 	--control-plane 
	I0717 18:32:07.464145   76391 kubeadm.go:310] 
	I0717 18:32:07.464245   76391 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:32:07.464257   76391 kubeadm.go:310] 
	I0717 18:32:07.464372   76391 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2lj338.n7y99vmpdx4rwfva \
	I0717 18:32:07.464503   76391 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:32:07.465356   76391 kubeadm.go:310] W0717 18:31:55.323806    1276 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:32:07.465692   76391 kubeadm.go:310] W0717 18:31:55.325194    1276 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:32:07.465822   76391 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:32:07.465849   76391 cni.go:84] Creating CNI manager for ""
	I0717 18:32:07.465859   76391 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:32:07.467568   76391 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:32:03.057151   77994 out.go:204]   - Booting up control plane ...
	I0717 18:32:03.057270   77994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:32:03.057371   77994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:32:03.057429   77994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:32:03.076263   77994 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:32:03.077291   77994 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:32:03.077384   77994 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:32:03.214187   77994 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:32:03.214308   77994 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:32:04.215325   77994 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002173836s
	I0717 18:32:04.215473   77994 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:32:07.468992   76391 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:32:07.483826   76391 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:32:07.502648   76391 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:32:07.502804   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:07.502893   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-066175 minikube.k8s.io/updated_at=2024_07_17T18_32_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=no-preload-066175 minikube.k8s.io/primary=true
	I0717 18:32:07.559426   76391 ops.go:34] apiserver oom_adj: -16
	I0717 18:32:07.721988   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:08.222446   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:08.722013   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:09.222076   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:09.214689   77994 kubeadm.go:310] [api-check] The API server is healthy after 5.002534955s
	I0717 18:32:09.230696   77994 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:32:09.252928   77994 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:32:09.284112   77994 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:32:09.284388   77994 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-527415 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:32:09.297006   77994 kubeadm.go:310] [bootstrap-token] Using token: a3ak5v.cv98bs6avaxmk4mp
	I0717 18:32:09.298461   77994 out.go:204]   - Configuring RBAC rules ...
	I0717 18:32:09.298606   77994 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:32:09.308006   77994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:32:09.315914   77994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:32:09.319324   77994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:32:09.322805   77994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:32:09.326217   77994 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:32:09.622993   77994 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:32:10.055436   77994 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:32:10.622037   77994 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:32:10.622078   77994 kubeadm.go:310] 
	I0717 18:32:10.622176   77994 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:32:10.622206   77994 kubeadm.go:310] 
	I0717 18:32:10.622314   77994 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:32:10.622342   77994 kubeadm.go:310] 
	I0717 18:32:10.622386   77994 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:32:10.622460   77994 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:32:10.622557   77994 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:32:10.622571   77994 kubeadm.go:310] 
	I0717 18:32:10.622671   77994 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:32:10.622682   77994 kubeadm.go:310] 
	I0717 18:32:10.622757   77994 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:32:10.622767   77994 kubeadm.go:310] 
	I0717 18:32:10.622837   77994 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:32:10.622946   77994 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:32:10.623047   77994 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:32:10.623057   77994 kubeadm.go:310] 
	I0717 18:32:10.623149   77994 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:32:10.623249   77994 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:32:10.623262   77994 kubeadm.go:310] 
	I0717 18:32:10.623377   77994 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a3ak5v.cv98bs6avaxmk4mp \
	I0717 18:32:10.623513   77994 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:32:10.623549   77994 kubeadm.go:310] 	--control-plane 
	I0717 18:32:10.623558   77994 kubeadm.go:310] 
	I0717 18:32:10.623668   77994 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:32:10.623679   77994 kubeadm.go:310] 
	I0717 18:32:10.623784   77994 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a3ak5v.cv98bs6avaxmk4mp \
	I0717 18:32:10.623913   77994 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:32:10.624051   77994 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:32:10.624074   77994 cni.go:84] Creating CNI manager for ""
	I0717 18:32:10.624087   77994 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:32:10.625793   77994 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:32:09.722118   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:10.222422   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:10.722519   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:11.222021   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:11.722103   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:12.222243   76391 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:12.299594   76391 kubeadm.go:1113] duration metric: took 4.796842133s to wait for elevateKubeSystemPrivileges
	I0717 18:32:12.299625   76391 kubeadm.go:394] duration metric: took 17.226686695s to StartCluster
	I0717 18:32:12.299643   76391 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:12.299710   76391 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:32:12.300525   76391 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:12.300734   76391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 18:32:12.300742   76391 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:32:12.300799   76391 addons.go:69] Setting storage-provisioner=true in profile "no-preload-066175"
	I0717 18:32:12.300817   76391 addons.go:69] Setting default-storageclass=true in profile "no-preload-066175"
	I0717 18:32:12.300836   76391 addons.go:234] Setting addon storage-provisioner=true in "no-preload-066175"
	I0717 18:32:12.300845   76391 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-066175"
	I0717 18:32:12.300727   76391 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:32:12.300864   76391 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:32:12.300930   76391 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:32:12.301301   76391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:12.301308   76391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:12.301337   76391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:12.301349   76391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:12.303700   76391 out.go:177] * Verifying Kubernetes components...
	I0717 18:32:12.305055   76391 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:32:12.316928   76391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I0717 18:32:12.316965   76391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41587
	I0717 18:32:12.317342   76391 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:12.317395   76391 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:12.317841   76391 main.go:141] libmachine: Using API Version  1
	I0717 18:32:12.317861   76391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:12.318009   76391 main.go:141] libmachine: Using API Version  1
	I0717 18:32:12.318035   76391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:12.318198   76391 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:12.318399   76391 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:12.318440   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:32:12.318952   76391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:12.318983   76391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:12.322050   76391 addons.go:234] Setting addon default-storageclass=true in "no-preload-066175"
	I0717 18:32:12.322094   76391 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:32:12.322489   76391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:12.322520   76391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:12.336191   76391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46215
	I0717 18:32:12.336721   76391 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:12.337242   76391 main.go:141] libmachine: Using API Version  1
	I0717 18:32:12.337266   76391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:12.337638   76391 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:12.337829   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:32:12.338963   76391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I0717 18:32:12.339440   76391 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:12.339824   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:32:12.340020   76391 main.go:141] libmachine: Using API Version  1
	I0717 18:32:12.340045   76391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:12.340375   76391 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:12.340926   76391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:12.340994   76391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:12.341956   76391 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:32:10.627191   77994 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:32:10.640013   77994 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:32:10.658487   77994 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:32:10.658556   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:10.658562   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-527415 minikube.k8s.io/updated_at=2024_07_17T18_32_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=embed-certs-527415 minikube.k8s.io/primary=true
	I0717 18:32:10.866189   77994 ops.go:34] apiserver oom_adj: -16
	I0717 18:32:10.866330   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:11.366429   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:11.867195   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:12.367254   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:12.343751   76391 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:32:12.343771   76391 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:32:12.343790   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:32:12.347332   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:32:12.347869   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:32:12.347895   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:32:12.348072   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:32:12.348259   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:32:12.348469   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:32:12.348622   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:32:12.357745   76391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0717 18:32:12.358161   76391 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:12.358642   76391 main.go:141] libmachine: Using API Version  1
	I0717 18:32:12.358655   76391 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:12.359015   76391 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:12.359205   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:32:12.360820   76391 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:32:12.361030   76391 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:32:12.361043   76391 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:32:12.361062   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:32:12.363864   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:32:12.364271   76391 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:32:12.364291   76391 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:32:12.364480   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:32:12.364644   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:32:12.364777   76391 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:32:12.364902   76391 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:32:12.447183   76391 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 18:32:12.489400   76391 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:32:12.603372   76391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:32:12.617104   76391 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:32:12.789894   76391 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0717 18:32:12.790793   76391 node_ready.go:35] waiting up to 6m0s for node "no-preload-066175" to be "Ready" ...
	I0717 18:32:12.804020   76391 node_ready.go:49] node "no-preload-066175" has status "Ready":"True"
	I0717 18:32:12.804042   76391 node_ready.go:38] duration metric: took 13.208161ms for node "no-preload-066175" to be "Ready" ...
	I0717 18:32:12.804053   76391 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:12.816264   76391 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-qb7wm" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:12.969841   76391 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:12.969868   76391 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:32:12.970124   76391 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:12.970143   76391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:12.970154   76391 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:12.970165   76391 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:32:12.970422   76391 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:12.970439   76391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:12.980050   76391 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:12.980070   76391 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:32:12.980320   76391 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:12.980337   76391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:13.138708   76391 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:13.138735   76391 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:32:13.139056   76391 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:32:13.139086   76391 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:13.139100   76391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:13.139123   76391 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:13.139135   76391 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:32:13.139369   76391 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:13.139386   76391 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:32:13.139388   76391 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:13.141708   76391 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0717 18:32:13.143046   76391 addons.go:510] duration metric: took 842.300638ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0717 18:32:13.294235   76391 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-066175" context rescaled to 1 replicas
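The "rescaled to 1 replicas" line above corresponds to shrinking the coredns Deployment through the scale subresource. A minimal client-go sketch of that operation (illustrative only, not minikube's kapi.go code; the kubeconfig path, namespace, and deployment name are the ones that appear in the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the log commands use.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Read the current scale of the coredns Deployment, then set it to 1.
	deployments := client.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns scaled to 1 replica")
}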
	I0717 18:32:13.319940   76391 pod_ready.go:97] error getting pod "coredns-5cfdc65f69-qb7wm" in "kube-system" namespace (skipping!): pods "coredns-5cfdc65f69-qb7wm" not found
	I0717 18:32:13.319964   76391 pod_ready.go:81] duration metric: took 503.676164ms for pod "coredns-5cfdc65f69-qb7wm" in "kube-system" namespace to be "Ready" ...
	E0717 18:32:13.319972   76391 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5cfdc65f69-qb7wm" in "kube-system" namespace (skipping!): pods "coredns-5cfdc65f69-qb7wm" not found
	I0717 18:32:13.319979   76391 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
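The pod_ready.go lines above poll each system-critical pod until its Ready condition is True, giving up after 6m0s. A minimal client-go sketch of such a wait loop (illustrative, not minikube's implementation; the 500ms poll interval is an assumption, while the kubeconfig path, namespace, and pod name come from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms (assumed interval), give up after 6 minutes like the log's timeout.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-spj2w", metav1.GetOptions{})
		if err != nil {
			return false, nil // pod may not exist yet; keep waiting
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("pod ready:", err == nil)
}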
	I0717 18:32:12.867153   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:13.366677   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:13.867151   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:14.367386   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:14.866672   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:15.366599   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:15.866972   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:16.366534   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:16.867423   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:17.366409   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:15.326751   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:17.327034   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:17.866993   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:18.366558   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:18.867336   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:19.366437   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:19.867145   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:20.366941   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:20.866366   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:21.366979   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:21.866895   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:22.366419   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:22.866835   77994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:23.049357   77994 kubeadm.go:1113] duration metric: took 12.390858013s to wait for elevateKubeSystemPrivileges
	I0717 18:32:23.049391   77994 kubeadm.go:394] duration metric: took 23.6228077s to StartCluster
	I0717 18:32:23.049412   77994 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:23.049500   77994 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:32:23.051540   77994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:23.051799   77994 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 18:32:23.051806   77994 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:32:23.051902   77994 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:32:23.051986   77994 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-527415"
	I0717 18:32:23.052005   77994 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:32:23.052019   77994 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-527415"
	I0717 18:32:23.052018   77994 addons.go:69] Setting default-storageclass=true in profile "embed-certs-527415"
	I0717 18:32:23.052047   77994 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-527415"
	I0717 18:32:23.052069   77994 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:32:23.052493   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:23.052518   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:23.052576   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:23.052623   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:23.053376   77994 out.go:177] * Verifying Kubernetes components...
	I0717 18:32:23.054586   77994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:32:23.067519   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45117
	I0717 18:32:23.067519   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I0717 18:32:23.068056   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:23.068101   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:23.068603   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:32:23.068622   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:23.068784   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:32:23.068815   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:23.068929   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:23.069117   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:23.069427   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:32:23.069550   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:23.069592   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:23.072592   77994 addons.go:234] Setting addon default-storageclass=true in "embed-certs-527415"
	I0717 18:32:23.072643   77994 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:32:23.072922   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:23.072980   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:23.084859   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44261
	I0717 18:32:23.085308   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:23.085836   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:32:23.085860   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:23.086210   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:23.086424   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:32:23.087266   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37363
	I0717 18:32:23.087613   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:23.088096   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:32:23.088118   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:23.088433   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:23.088539   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:32:23.088953   77994 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:23.088986   77994 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:23.091021   77994 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:32:19.327651   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:21.826194   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:23.827863   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:23.092654   77994 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:32:23.092675   77994 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:32:23.092692   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:32:23.095593   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:32:23.096061   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:32:23.096110   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:32:23.096300   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:32:23.096499   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:32:23.096657   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:32:23.096820   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:32:23.106161   77994 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0717 18:32:23.106530   77994 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:23.107007   77994 main.go:141] libmachine: Using API Version  1
	I0717 18:32:23.107023   77994 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:23.107300   77994 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:23.107445   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:32:23.108998   77994 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:32:23.109166   77994 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:32:23.109175   77994 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:32:23.109187   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:32:23.111274   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:32:23.111551   77994 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:32:23.111571   77994 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:32:23.111728   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:32:23.111877   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:32:23.112017   77994 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:32:23.112106   77994 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:32:23.295935   77994 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:32:23.296022   77994 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 18:32:23.388927   77994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:32:23.431711   77994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:32:23.850201   77994 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0717 18:32:23.851409   77994 node_ready.go:35] waiting up to 6m0s for node "embed-certs-527415" to be "Ready" ...
	I0717 18:32:23.863182   77994 node_ready.go:49] node "embed-certs-527415" has status "Ready":"True"
	I0717 18:32:23.863208   77994 node_ready.go:38] duration metric: took 11.769585ms for node "embed-certs-527415" to be "Ready" ...
	I0717 18:32:23.863219   77994 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:23.878221   77994 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:23.922366   77994 pod_ready.go:92] pod "etcd-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:23.922397   77994 pod_ready.go:81] duration metric: took 44.145148ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:23.922412   77994 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:23.972286   77994 pod_ready.go:92] pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:23.972317   77994 pod_ready.go:81] duration metric: took 49.896346ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:23.972332   77994 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:24.004551   77994 pod_ready.go:92] pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:24.004584   77994 pod_ready.go:81] duration metric: took 32.243425ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:24.004600   77994 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:24.259424   77994 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:24.259454   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:32:24.259453   77994 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:24.259472   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:32:24.259854   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:32:24.259862   77994 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:24.259875   77994 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:24.259883   77994 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:24.259892   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:32:24.259892   77994 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:24.259955   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:32:24.259972   77994 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:24.260042   77994 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:24.260077   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:32:24.260119   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:32:24.260145   77994 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:24.260163   77994 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:24.260503   77994 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:32:24.260567   77994 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:24.260690   77994 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:24.273688   77994 main.go:141] libmachine: Making call to close driver server
	I0717 18:32:24.273713   77994 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:32:24.273996   77994 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:32:24.274011   77994 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:32:24.276422   77994 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 18:32:24.506526   64770 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (21.559697462s)
	I0717 18:32:24.506598   64770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:32:24.522465   64770 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:32:24.532133   64770 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:32:24.544821   64770 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:32:24.544845   64770 kubeadm.go:157] found existing configuration files:
	
	I0717 18:32:24.544897   64770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:32:24.554424   64770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:32:24.554488   64770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:32:24.566237   64770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:32:24.575272   64770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:32:24.575334   64770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:32:24.584999   64770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:32:24.593607   64770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:32:24.593669   64770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:32:24.602671   64770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:32:24.614348   64770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:32:24.614410   64770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:32:24.626954   64770 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:32:24.684529   64770 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:32:24.684607   64770 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:32:24.829772   64770 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:32:24.829896   64770 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:32:24.830052   64770 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:32:25.042058   64770 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:32:25.043848   64770 out.go:204]   - Generating certificates and keys ...
	I0717 18:32:25.043957   64770 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:32:25.044053   64770 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:32:25.044179   64770 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:32:25.044269   64770 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:32:25.044369   64770 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:32:25.044458   64770 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:32:25.044530   64770 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:32:25.044640   64770 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:32:25.044744   64770 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:32:25.044856   64770 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:32:25.044915   64770 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:32:25.045017   64770 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:32:25.133990   64770 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:32:25.333240   64770 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:32:25.496733   64770 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:32:25.669974   64770 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:32:25.748419   64770 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:32:25.748921   64770 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:32:25.751254   64770 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:32:25.752949   64770 out.go:204]   - Booting up control plane ...
	I0717 18:32:25.753065   64770 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:32:25.753188   64770 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:32:25.753300   64770 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:32:25.773041   64770 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:32:25.774016   64770 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:32:25.774075   64770 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:32:24.277689   77994 addons.go:510] duration metric: took 1.225784419s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0717 18:32:24.353967   77994 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-527415" context rescaled to 1 replicas
	I0717 18:32:25.510657   77994 pod_ready.go:92] pod "kube-proxy-jltfs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:25.510700   77994 pod_ready.go:81] duration metric: took 1.506082868s for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:25.510712   77994 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:25.515157   77994 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:25.515190   77994 pod_ready.go:81] duration metric: took 4.469793ms for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:25.515199   77994 pod_ready.go:38] duration metric: took 1.651968378s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:25.515216   77994 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:32:25.515265   77994 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:32:25.530170   77994 api_server.go:72] duration metric: took 2.478333128s to wait for apiserver process to appear ...
	I0717 18:32:25.530195   77994 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:32:25.530213   77994 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:32:25.535348   77994 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:32:25.536289   77994 api_server.go:141] control plane version: v1.30.2
	I0717 18:32:25.536309   77994 api_server.go:131] duration metric: took 6.106885ms to wait for apiserver health ...
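The healthz probe logged just above is a plain HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 response with body "ok". A minimal sketch of that check (illustrative; the endpoint URL is the one from the log, and skipping TLS verification here is a simplification for brevity, not necessarily what minikube does):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Short-timeout client; certificate verification is disabled only to keep the sketch small.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.61.90:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}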
	I0717 18:32:25.536318   77994 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:32:25.657797   77994 system_pods.go:59] 7 kube-system pods found
	I0717 18:32:25.657831   77994 system_pods.go:61] "coredns-7db6d8ff4d-2fnlb" [86d50e9b-fb88-4332-90c5-a969b0654635] Running
	I0717 18:32:25.657838   77994 system_pods.go:61] "etcd-embed-certs-527415" [9d8ac0a8-4639-48d8-8ac4-88b0bd1e2082] Running
	I0717 18:32:25.657844   77994 system_pods.go:61] "kube-apiserver-embed-certs-527415" [7f72c4f9-f1db-4ac6-83e1-2b94245107c9] Running
	I0717 18:32:25.657851   77994 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [96081a97-2a90-4fec-84cb-9a399a43aeb4] Running
	I0717 18:32:25.657857   77994 system_pods.go:61] "kube-proxy-jltfs" [27f6259e-80cc-4881-bb06-6a2ad529179c] Running
	I0717 18:32:25.657862   77994 system_pods.go:61] "kube-scheduler-embed-certs-527415" [bed7b515-7ab0-460c-a13f-037f29576f30] Running
	I0717 18:32:25.657867   77994 system_pods.go:61] "storage-provisioner" [ccb34b69-d28d-477e-8c7a-0acdc547bec7] Running
	I0717 18:32:25.657874   77994 system_pods.go:74] duration metric: took 121.550087ms to wait for pod list to return data ...
	I0717 18:32:25.657885   77994 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:32:25.854953   77994 default_sa.go:45] found service account: "default"
	I0717 18:32:25.854985   77994 default_sa.go:55] duration metric: took 197.091585ms for default service account to be created ...
	I0717 18:32:25.854994   77994 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:32:26.058082   77994 system_pods.go:86] 7 kube-system pods found
	I0717 18:32:26.058107   77994 system_pods.go:89] "coredns-7db6d8ff4d-2fnlb" [86d50e9b-fb88-4332-90c5-a969b0654635] Running
	I0717 18:32:26.058112   77994 system_pods.go:89] "etcd-embed-certs-527415" [9d8ac0a8-4639-48d8-8ac4-88b0bd1e2082] Running
	I0717 18:32:26.058116   77994 system_pods.go:89] "kube-apiserver-embed-certs-527415" [7f72c4f9-f1db-4ac6-83e1-2b94245107c9] Running
	I0717 18:32:26.058120   77994 system_pods.go:89] "kube-controller-manager-embed-certs-527415" [96081a97-2a90-4fec-84cb-9a399a43aeb4] Running
	I0717 18:32:26.058124   77994 system_pods.go:89] "kube-proxy-jltfs" [27f6259e-80cc-4881-bb06-6a2ad529179c] Running
	I0717 18:32:26.058128   77994 system_pods.go:89] "kube-scheduler-embed-certs-527415" [bed7b515-7ab0-460c-a13f-037f29576f30] Running
	I0717 18:32:26.058131   77994 system_pods.go:89] "storage-provisioner" [ccb34b69-d28d-477e-8c7a-0acdc547bec7] Running
	I0717 18:32:26.058137   77994 system_pods.go:126] duration metric: took 203.139243ms to wait for k8s-apps to be running ...
	I0717 18:32:26.058144   77994 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:32:26.058184   77994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:32:26.072008   77994 system_svc.go:56] duration metric: took 13.857466ms WaitForService to wait for kubelet
	I0717 18:32:26.072029   77994 kubeadm.go:582] duration metric: took 3.020194343s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:32:26.072053   77994 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:32:26.256016   77994 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:32:26.256045   77994 node_conditions.go:123] node cpu capacity is 2
	I0717 18:32:26.256059   77994 node_conditions.go:105] duration metric: took 183.999929ms to run NodePressure ...
	I0717 18:32:26.256070   77994 start.go:241] waiting for startup goroutines ...
	I0717 18:32:26.256076   77994 start.go:246] waiting for cluster config update ...
	I0717 18:32:26.256086   77994 start.go:255] writing updated cluster config ...
	I0717 18:32:26.256362   77994 ssh_runner.go:195] Run: rm -f paused
	I0717 18:32:26.309934   77994 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:32:26.311896   77994 out.go:177] * Done! kubectl is now configured to use "embed-certs-527415" cluster and "default" namespace by default
	I0717 18:32:26.326787   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:28.327057   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:25.906961   64770 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:32:25.907084   64770 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:32:26.908851   64770 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001768612s
	I0717 18:32:26.908965   64770 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:32:31.410170   64770 kubeadm.go:310] [api-check] The API server is healthy after 4.501210398s
	I0717 18:32:31.423141   64770 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:32:31.437827   64770 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:32:31.459779   64770 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:32:31.460045   64770 kubeadm.go:310] [mark-control-plane] Marking the node pause-371172 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:32:31.470275   64770 kubeadm.go:310] [bootstrap-token] Using token: 5jyj9a.o3rmgl5b7o1vg2ev
	I0717 18:32:31.471766   64770 out.go:204]   - Configuring RBAC rules ...
	I0717 18:32:31.471898   64770 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:32:31.478042   64770 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:32:31.491995   64770 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:32:31.499200   64770 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:32:31.502657   64770 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:32:31.505464   64770 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:32:31.820754   64770 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:32:32.246760   64770 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:32:32.821662   64770 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:32:32.822716   64770 kubeadm.go:310] 
	I0717 18:32:32.822787   64770 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:32:32.822799   64770 kubeadm.go:310] 
	I0717 18:32:32.822911   64770 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:32:32.822936   64770 kubeadm.go:310] 
	I0717 18:32:32.822972   64770 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:32:32.823052   64770 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:32:32.823123   64770 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:32:32.823134   64770 kubeadm.go:310] 
	I0717 18:32:32.823204   64770 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:32:32.823214   64770 kubeadm.go:310] 
	I0717 18:32:32.823288   64770 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:32:32.823302   64770 kubeadm.go:310] 
	I0717 18:32:32.823367   64770 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:32:32.823462   64770 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:32:32.823548   64770 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:32:32.823557   64770 kubeadm.go:310] 
	I0717 18:32:32.823681   64770 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:32:32.823795   64770 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:32:32.823811   64770 kubeadm.go:310] 
	I0717 18:32:32.823904   64770 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5jyj9a.o3rmgl5b7o1vg2ev \
	I0717 18:32:32.824022   64770 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:32:32.824052   64770 kubeadm.go:310] 	--control-plane 
	I0717 18:32:32.824058   64770 kubeadm.go:310] 
	I0717 18:32:32.824156   64770 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:32:32.824166   64770 kubeadm.go:310] 
	I0717 18:32:32.824259   64770 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5jyj9a.o3rmgl5b7o1vg2ev \
	I0717 18:32:32.824372   64770 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:32:32.825050   64770 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:32:32.825081   64770 cni.go:84] Creating CNI manager for ""
	I0717 18:32:32.825091   64770 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:32:32.826796   64770 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:32:30.826334   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:32.827864   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:32.828019   64770 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:32:32.838128   64770 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:32:32.855649   64770 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:32:32.855717   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:32.855756   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-371172 minikube.k8s.io/updated_at=2024_07_17T18_32_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=pause-371172 minikube.k8s.io/primary=true
	I0717 18:32:32.892253   64770 ops.go:34] apiserver oom_adj: -16
	I0717 18:32:32.955417   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:33.455643   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:33.955923   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:34.455692   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:34.955577   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:35.456396   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:35.326762   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:37.328053   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:35.956455   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:36.455679   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:36.956189   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:37.455691   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:37.955711   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:38.455564   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:38.955808   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:39.455805   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:39.955504   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:40.455927   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:39.827074   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:42.327739   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:40.955576   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:41.456147   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:41.955780   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:42.455962   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:42.955917   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:43.456077   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:43.955935   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:44.456127   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:44.956361   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:45.456107   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:45.956243   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:46.456443   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:46.956212   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:47.455607   64770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:32:47.554718   64770 kubeadm.go:1113] duration metric: took 14.699058976s to wait for elevateKubeSystemPrivileges
	I0717 18:32:47.554754   64770 kubeadm.go:394] duration metric: took 5m26.289545826s to StartCluster
	I0717 18:32:47.554774   64770 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:47.554859   64770 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:32:47.556276   64770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:32:47.556540   64770 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:32:47.556599   64770 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:32:47.556761   64770 config.go:182] Loaded profile config "pause-371172": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:32:47.558184   64770 out.go:177] * Verifying Kubernetes components...
	I0717 18:32:47.559039   64770 out.go:177] * Enabled addons: 
	I0717 18:32:44.826544   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:47.326337   76391 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:32:49.327760   76391 pod_ready.go:92] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.327780   76391 pod_ready.go:81] duration metric: took 36.007794739s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.327788   76391 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.332810   76391 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.332837   76391 pod_ready.go:81] duration metric: took 5.041956ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.332850   76391 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.337104   76391 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.337124   76391 pod_ready.go:81] duration metric: took 4.266061ms for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.337133   76391 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.342354   76391 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.342372   76391 pod_ready.go:81] duration metric: took 5.231615ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.342382   76391 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.346851   76391 pod_ready.go:92] pod "kube-proxy-tn5xn" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.346867   76391 pod_ready.go:81] duration metric: took 4.471918ms for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.346876   76391 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.724382   76391 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.724415   76391 pod_ready.go:81] duration metric: took 377.530235ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.724427   76391 pod_ready.go:38] duration metric: took 36.920360552s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:49.724443   76391 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:32:49.724502   76391 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:32:49.739919   76391 api_server.go:72] duration metric: took 37.439039525s to wait for apiserver process to appear ...
	I0717 18:32:49.739941   76391 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:32:49.739957   76391 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:32:49.744304   76391 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:32:49.745279   76391 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:32:49.745298   76391 api_server.go:131] duration metric: took 5.350779ms to wait for apiserver health ...
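The healthz wait above is a plain HTTPS GET against the apiserver that must return 200 with a body of "ok". A manual equivalent, assuming access to the node or a kubeconfig for this cluster (the -k flag skips TLS verification and is only for a quick check; anonymous access to /healthz may be disabled on some clusters):

    curl -sk https://192.168.72.216:8443/healthz    # prints: ok
    kubectl get --raw /healthz                      # same check via the kubeconfig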
	I0717 18:32:49.745305   76391 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:32:49.928037   76391 system_pods.go:59] 7 kube-system pods found
	I0717 18:32:49.928084   76391 system_pods.go:61] "coredns-5cfdc65f69-spj2w" [6849b651-9346-4d96-97a7-88eca7bbd50a] Running
	I0717 18:32:49.928091   76391 system_pods.go:61] "etcd-no-preload-066175" [be012488-220b-421d-bf16-a3623fafb8fa] Running
	I0717 18:32:49.928097   76391 system_pods.go:61] "kube-apiserver-no-preload-066175" [4292a786-61f3-405d-8784-ec8a58e1b124] Running
	I0717 18:32:49.928102   76391 system_pods.go:61] "kube-controller-manager-no-preload-066175" [937a48f4-7fca-4cee-bb50-51f1720960da] Running
	I0717 18:32:49.928106   76391 system_pods.go:61] "kube-proxy-tn5xn" [f0a910b3-98b6-470f-a5a2-e49369ecb733] Running
	I0717 18:32:49.928116   76391 system_pods.go:61] "kube-scheduler-no-preload-066175" [ffa2475c-7a5a-4988-89a2-4727e07356cb] Running
	I0717 18:32:49.928120   76391 system_pods.go:61] "storage-provisioner" [19914ecc-2fcc-4cb8-bd78-fb6891dcf85d] Running
	I0717 18:32:49.928128   76391 system_pods.go:74] duration metric: took 182.816852ms to wait for pod list to return data ...
	I0717 18:32:49.928136   76391 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:32:50.125244   76391 default_sa.go:45] found service account: "default"
	I0717 18:32:50.125274   76391 default_sa.go:55] duration metric: took 197.131625ms for default service account to be created ...
	I0717 18:32:50.125284   76391 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:32:50.327165   76391 system_pods.go:86] 7 kube-system pods found
	I0717 18:32:50.327192   76391 system_pods.go:89] "coredns-5cfdc65f69-spj2w" [6849b651-9346-4d96-97a7-88eca7bbd50a] Running
	I0717 18:32:50.327197   76391 system_pods.go:89] "etcd-no-preload-066175" [be012488-220b-421d-bf16-a3623fafb8fa] Running
	I0717 18:32:50.327201   76391 system_pods.go:89] "kube-apiserver-no-preload-066175" [4292a786-61f3-405d-8784-ec8a58e1b124] Running
	I0717 18:32:50.327205   76391 system_pods.go:89] "kube-controller-manager-no-preload-066175" [937a48f4-7fca-4cee-bb50-51f1720960da] Running
	I0717 18:32:50.327209   76391 system_pods.go:89] "kube-proxy-tn5xn" [f0a910b3-98b6-470f-a5a2-e49369ecb733] Running
	I0717 18:32:50.327213   76391 system_pods.go:89] "kube-scheduler-no-preload-066175" [ffa2475c-7a5a-4988-89a2-4727e07356cb] Running
	I0717 18:32:50.327216   76391 system_pods.go:89] "storage-provisioner" [19914ecc-2fcc-4cb8-bd78-fb6891dcf85d] Running
	I0717 18:32:50.327222   76391 system_pods.go:126] duration metric: took 201.933585ms to wait for k8s-apps to be running ...
	I0717 18:32:50.327227   76391 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:32:50.327272   76391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:32:50.341672   76391 system_svc.go:56] duration metric: took 14.434151ms WaitForService to wait for kubelet
	I0717 18:32:50.341703   76391 kubeadm.go:582] duration metric: took 38.040827725s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:32:50.341724   76391 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:32:50.525046   76391 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:32:50.525074   76391 node_conditions.go:123] node cpu capacity is 2
	I0717 18:32:50.525085   76391 node_conditions.go:105] duration metric: took 183.356783ms to run NodePressure ...
	I0717 18:32:50.525095   76391 start.go:241] waiting for startup goroutines ...
	I0717 18:32:50.525106   76391 start.go:246] waiting for cluster config update ...
	I0717 18:32:50.525115   76391 start.go:255] writing updated cluster config ...
	I0717 18:32:50.525370   76391 ssh_runner.go:195] Run: rm -f paused
	I0717 18:32:50.572889   76391 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 18:32:50.574822   76391 out.go:177] * Done! kubectl is now configured to use "no-preload-066175" cluster and "default" namespace by default
	I0717 18:32:47.560038   64770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:32:47.560806   64770 addons.go:510] duration metric: took 4.212164ms for enable addons: enabled=[]
	I0717 18:32:47.732445   64770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:32:47.765302   64770 node_ready.go:35] waiting up to 6m0s for node "pause-371172" to be "Ready" ...
	I0717 18:32:47.773105   64770 node_ready.go:49] node "pause-371172" has status "Ready":"True"
	I0717 18:32:47.773124   64770 node_ready.go:38] duration metric: took 7.786324ms for node "pause-371172" to be "Ready" ...
	I0717 18:32:47.773132   64770 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:47.780749   64770 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-884nf" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.294878   64770 pod_ready.go:92] pod "coredns-7db6d8ff4d-884nf" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.294901   64770 pod_ready.go:81] duration metric: took 1.514125468s for pod "coredns-7db6d8ff4d-884nf" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.294910   64770 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fds59" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.305093   64770 pod_ready.go:92] pod "coredns-7db6d8ff4d-fds59" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.305114   64770 pod_ready.go:81] duration metric: took 10.197745ms for pod "coredns-7db6d8ff4d-fds59" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.305125   64770 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.310353   64770 pod_ready.go:92] pod "etcd-pause-371172" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.310376   64770 pod_ready.go:81] duration metric: took 5.245469ms for pod "etcd-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.310384   64770 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.315575   64770 pod_ready.go:92] pod "kube-apiserver-pause-371172" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.315595   64770 pod_ready.go:81] duration metric: took 5.20478ms for pod "kube-apiserver-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.315604   64770 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.368580   64770 pod_ready.go:92] pod "kube-controller-manager-pause-371172" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.368604   64770 pod_ready.go:81] duration metric: took 52.994204ms for pod "kube-controller-manager-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.368616   64770 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m9svn" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.769496   64770 pod_ready.go:92] pod "kube-proxy-m9svn" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:49.769516   64770 pod_ready.go:81] duration metric: took 400.894448ms for pod "kube-proxy-m9svn" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:49.769529   64770 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:50.170101   64770 pod_ready.go:92] pod "kube-scheduler-pause-371172" in "kube-system" namespace has status "Ready":"True"
	I0717 18:32:50.170121   64770 pod_ready.go:81] duration metric: took 400.586022ms for pod "kube-scheduler-pause-371172" in "kube-system" namespace to be "Ready" ...
	I0717 18:32:50.170130   64770 pod_ready.go:38] duration metric: took 2.396988581s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:32:50.170143   64770 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:32:50.170187   64770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:32:50.187214   64770 api_server.go:72] duration metric: took 2.630643931s to wait for apiserver process to appear ...
	I0717 18:32:50.187234   64770 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:32:50.187250   64770 api_server.go:253] Checking apiserver healthz at https://192.168.50.21:8443/healthz ...
	I0717 18:32:50.193392   64770 api_server.go:279] https://192.168.50.21:8443/healthz returned 200:
	ok
	I0717 18:32:50.194490   64770 api_server.go:141] control plane version: v1.30.2
	I0717 18:32:50.194514   64770 api_server.go:131] duration metric: took 7.272389ms to wait for apiserver health ...
	I0717 18:32:50.194523   64770 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:32:50.371172   64770 system_pods.go:59] 7 kube-system pods found
	I0717 18:32:50.371200   64770 system_pods.go:61] "coredns-7db6d8ff4d-884nf" [27cac9c3-742d-416c-a281-0aaf074fbd3a] Running
	I0717 18:32:50.371205   64770 system_pods.go:61] "coredns-7db6d8ff4d-fds59" [753107be-ccbf-431f-8a2e-e79bdb96f7c4] Running
	I0717 18:32:50.371209   64770 system_pods.go:61] "etcd-pause-371172" [40b5faff-c706-4a73-8a4b-b71a85a6360f] Running
	I0717 18:32:50.371212   64770 system_pods.go:61] "kube-apiserver-pause-371172" [fa9bc423-2462-4ede-ab92-3cc052996937] Running
	I0717 18:32:50.371216   64770 system_pods.go:61] "kube-controller-manager-pause-371172" [62f978f8-ea27-438e-9632-b7367c7054c4] Running
	I0717 18:32:50.371219   64770 system_pods.go:61] "kube-proxy-m9svn" [9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e] Running
	I0717 18:32:50.371222   64770 system_pods.go:61] "kube-scheduler-pause-371172" [7974024d-6422-42eb-a8d7-f21d57cfe807] Running
	I0717 18:32:50.371227   64770 system_pods.go:74] duration metric: took 176.697366ms to wait for pod list to return data ...
	I0717 18:32:50.371234   64770 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:32:50.569599   64770 default_sa.go:45] found service account: "default"
	I0717 18:32:50.569629   64770 default_sa.go:55] duration metric: took 198.388656ms for default service account to be created ...
	I0717 18:32:50.569646   64770 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:32:50.771808   64770 system_pods.go:86] 7 kube-system pods found
	I0717 18:32:50.771838   64770 system_pods.go:89] "coredns-7db6d8ff4d-884nf" [27cac9c3-742d-416c-a281-0aaf074fbd3a] Running
	I0717 18:32:50.771846   64770 system_pods.go:89] "coredns-7db6d8ff4d-fds59" [753107be-ccbf-431f-8a2e-e79bdb96f7c4] Running
	I0717 18:32:50.771852   64770 system_pods.go:89] "etcd-pause-371172" [40b5faff-c706-4a73-8a4b-b71a85a6360f] Running
	I0717 18:32:50.771858   64770 system_pods.go:89] "kube-apiserver-pause-371172" [fa9bc423-2462-4ede-ab92-3cc052996937] Running
	I0717 18:32:50.771864   64770 system_pods.go:89] "kube-controller-manager-pause-371172" [62f978f8-ea27-438e-9632-b7367c7054c4] Running
	I0717 18:32:50.771870   64770 system_pods.go:89] "kube-proxy-m9svn" [9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e] Running
	I0717 18:32:50.771877   64770 system_pods.go:89] "kube-scheduler-pause-371172" [7974024d-6422-42eb-a8d7-f21d57cfe807] Running
	I0717 18:32:50.771886   64770 system_pods.go:126] duration metric: took 202.233078ms to wait for k8s-apps to be running ...
	I0717 18:32:50.771898   64770 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:32:50.771938   64770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:32:50.786825   64770 system_svc.go:56] duration metric: took 14.917593ms WaitForService to wait for kubelet
	I0717 18:32:50.786857   64770 kubeadm.go:582] duration metric: took 3.23028737s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:32:50.786880   64770 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:32:50.969667   64770 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:32:50.969689   64770 node_conditions.go:123] node cpu capacity is 2
	I0717 18:32:50.969697   64770 node_conditions.go:105] duration metric: took 182.808234ms to run NodePressure ...
	I0717 18:32:50.969707   64770 start.go:241] waiting for startup goroutines ...
	I0717 18:32:50.969713   64770 start.go:246] waiting for cluster config update ...
	I0717 18:32:50.969720   64770 start.go:255] writing updated cluster config ...
	I0717 18:32:50.970016   64770 ssh_runner.go:195] Run: rm -f paused
	I0717 18:32:51.017830   64770 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:32:51.019761   64770 out.go:177] * Done! kubectl is now configured to use "pause-371172" cluster and "default" namespace by default
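After the "Done!" line, kubectl's current context points at the new profile (context names follow the minikube profile name, so "pause-371172" here). A quick way to confirm from the same machine:

    kubectl config current-context    # expected: pause-371172
    kubectl get nodes -o wide         # node should report Ready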
	
	
	==> CRI-O <==
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.452837866Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241173452813667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62622b34-c0b0-4887-9f6d-49e2d2023e19 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.453399862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bba83dad-d1b9-47fc-82e9-515ec53abc15 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.453461603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bba83dad-d1b9-47fc-82e9-515ec53abc15 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.453745725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b1e482d1c8ef316e529644708a390e6e7f46dc5f9b2a3272f391471372039b,PodSandboxId:44ffd8512f7ade9b1821a6405a025b08a7faade2182bb67cbf7ed33b961a60ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168292272767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fds59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753107be-ccbf-431f-8a2e-e79bdb96f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9edc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922a7e0f262a8282d9e72f42fdeca7478428c833dbeb2a9b95a5738d1ef95e69,PodSandboxId:4c4408303c67164222da84a7bf59e287a06e4fc94ed1085a051669523e55e20d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168214407421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-884nf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 27cac9c3-742d-416c-a281-0aaf074fbd3a,},Annotations:map[string]string{io.kubernetes.container.hash: ed3513db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26db6113dcb1ff79fcd77d6d39b46c69b8761312bf5238a27ffd2e11eda174f7,PodSandboxId:06c5f342d1f79b4c2d91bf5328ab371a7f226ef280618ba0f1d3990c7d0c6c34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Cre
atedAt:1721241167795198780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9svn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 4dd93799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b08dd954d955c74b4b84f21646fa33facb15fc2c1e53a68975c3187779cc6a29,PodSandboxId:ad26dc0b8c7874b3a7bbc2e23810502f05e3201a2756f197b8bd1d96e6efa775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241147345888833,La
bels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af54c2061de253ace2de68751df8da5,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1a2aa0f51d772dc4abbce1d3004d6b52f7961de71561a8776ab799c79b8df0,PodSandboxId:560487aab164542ad8417325db6c3c052cb855002b2abbf25560b824f4736d5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241147342168856,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fa797dfdfb736f9e861ba1561f2f58,},Annotations:map[string]string{io.kubernetes.container.hash: 7731edf5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ec9b207bfd3ffea2c95fc2e155c8e565236b4b1b904baaab96e556de26fe77,PodSandboxId:58f953c6498c815f64e7e72954faa60fe9e485ad07173c9fe959e57055ceffec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241147319076750,Labels:map[string]string{io.kubernetes.container.name: kube-
controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac1aa9fed9b42ec68485013aa64c8d2,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64e3a55717be3283a0695169654d5d905bfebf0b9f499df4ed4bf6766596ea1,PodSandboxId:e5d77122fae676106ac8f266d61cf0116d8b98a826602e4cac2ad55e8ef3a286,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241147237628865,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406735d310893ae4eeec2b9b969cff1442005eab3956fac313fbf5545470e815,PodSandboxId:f6d12725dc8e4ef65263bba54f4f8d6cea4b89d3899c69d1156a4e7191ba39f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721240863934863408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bba83dad-d1b9-47fc-82e9-515ec53abc15 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.487526677Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5713a6a5-ecc6-4db6-aaba-75763c7fe26f name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.487614957Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5713a6a5-ecc6-4db6-aaba-75763c7fe26f name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.488520999Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b57d9b91-529d-4157-a879-e27035613f3c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.488874874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241173488852520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b57d9b91-529d-4157-a879-e27035613f3c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.489401918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49b69453-897a-4e28-aa79-9b7c2730da82 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.489467918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49b69453-897a-4e28-aa79-9b7c2730da82 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.489694385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b1e482d1c8ef316e529644708a390e6e7f46dc5f9b2a3272f391471372039b,PodSandboxId:44ffd8512f7ade9b1821a6405a025b08a7faade2182bb67cbf7ed33b961a60ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168292272767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fds59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753107be-ccbf-431f-8a2e-e79bdb96f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9edc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922a7e0f262a8282d9e72f42fdeca7478428c833dbeb2a9b95a5738d1ef95e69,PodSandboxId:4c4408303c67164222da84a7bf59e287a06e4fc94ed1085a051669523e55e20d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168214407421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-884nf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 27cac9c3-742d-416c-a281-0aaf074fbd3a,},Annotations:map[string]string{io.kubernetes.container.hash: ed3513db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26db6113dcb1ff79fcd77d6d39b46c69b8761312bf5238a27ffd2e11eda174f7,PodSandboxId:06c5f342d1f79b4c2d91bf5328ab371a7f226ef280618ba0f1d3990c7d0c6c34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Cre
atedAt:1721241167795198780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9svn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 4dd93799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b08dd954d955c74b4b84f21646fa33facb15fc2c1e53a68975c3187779cc6a29,PodSandboxId:ad26dc0b8c7874b3a7bbc2e23810502f05e3201a2756f197b8bd1d96e6efa775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241147345888833,La
bels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af54c2061de253ace2de68751df8da5,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1a2aa0f51d772dc4abbce1d3004d6b52f7961de71561a8776ab799c79b8df0,PodSandboxId:560487aab164542ad8417325db6c3c052cb855002b2abbf25560b824f4736d5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241147342168856,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fa797dfdfb736f9e861ba1561f2f58,},Annotations:map[string]string{io.kubernetes.container.hash: 7731edf5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ec9b207bfd3ffea2c95fc2e155c8e565236b4b1b904baaab96e556de26fe77,PodSandboxId:58f953c6498c815f64e7e72954faa60fe9e485ad07173c9fe959e57055ceffec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241147319076750,Labels:map[string]string{io.kubernetes.container.name: kube-
controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac1aa9fed9b42ec68485013aa64c8d2,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64e3a55717be3283a0695169654d5d905bfebf0b9f499df4ed4bf6766596ea1,PodSandboxId:e5d77122fae676106ac8f266d61cf0116d8b98a826602e4cac2ad55e8ef3a286,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241147237628865,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406735d310893ae4eeec2b9b969cff1442005eab3956fac313fbf5545470e815,PodSandboxId:f6d12725dc8e4ef65263bba54f4f8d6cea4b89d3899c69d1156a4e7191ba39f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721240863934863408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49b69453-897a-4e28-aa79-9b7c2730da82 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.526614351Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01b89505-1e7e-408a-b0af-6d9d44d33c50 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.526697295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01b89505-1e7e-408a-b0af-6d9d44d33c50 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.527696336Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0133228f-713f-4a07-8eec-1cebdfcbdb09 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.528064935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241173528043103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0133228f-713f-4a07-8eec-1cebdfcbdb09 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.528584234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f394d97-7714-47f0-9bd5-55a6554d92fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.528650538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f394d97-7714-47f0-9bd5-55a6554d92fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.528829202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b1e482d1c8ef316e529644708a390e6e7f46dc5f9b2a3272f391471372039b,PodSandboxId:44ffd8512f7ade9b1821a6405a025b08a7faade2182bb67cbf7ed33b961a60ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168292272767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fds59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753107be-ccbf-431f-8a2e-e79bdb96f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9edc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922a7e0f262a8282d9e72f42fdeca7478428c833dbeb2a9b95a5738d1ef95e69,PodSandboxId:4c4408303c67164222da84a7bf59e287a06e4fc94ed1085a051669523e55e20d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168214407421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-884nf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 27cac9c3-742d-416c-a281-0aaf074fbd3a,},Annotations:map[string]string{io.kubernetes.container.hash: ed3513db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26db6113dcb1ff79fcd77d6d39b46c69b8761312bf5238a27ffd2e11eda174f7,PodSandboxId:06c5f342d1f79b4c2d91bf5328ab371a7f226ef280618ba0f1d3990c7d0c6c34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Cre
atedAt:1721241167795198780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9svn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 4dd93799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b08dd954d955c74b4b84f21646fa33facb15fc2c1e53a68975c3187779cc6a29,PodSandboxId:ad26dc0b8c7874b3a7bbc2e23810502f05e3201a2756f197b8bd1d96e6efa775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241147345888833,La
bels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af54c2061de253ace2de68751df8da5,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1a2aa0f51d772dc4abbce1d3004d6b52f7961de71561a8776ab799c79b8df0,PodSandboxId:560487aab164542ad8417325db6c3c052cb855002b2abbf25560b824f4736d5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241147342168856,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fa797dfdfb736f9e861ba1561f2f58,},Annotations:map[string]string{io.kubernetes.container.hash: 7731edf5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ec9b207bfd3ffea2c95fc2e155c8e565236b4b1b904baaab96e556de26fe77,PodSandboxId:58f953c6498c815f64e7e72954faa60fe9e485ad07173c9fe959e57055ceffec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241147319076750,Labels:map[string]string{io.kubernetes.container.name: kube-
controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac1aa9fed9b42ec68485013aa64c8d2,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64e3a55717be3283a0695169654d5d905bfebf0b9f499df4ed4bf6766596ea1,PodSandboxId:e5d77122fae676106ac8f266d61cf0116d8b98a826602e4cac2ad55e8ef3a286,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241147237628865,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406735d310893ae4eeec2b9b969cff1442005eab3956fac313fbf5545470e815,PodSandboxId:f6d12725dc8e4ef65263bba54f4f8d6cea4b89d3899c69d1156a4e7191ba39f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721240863934863408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f394d97-7714-47f0-9bd5-55a6554d92fd name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.568925453Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=78e8665a-834e-4c5b-a337-c5ad121a487d name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.569023153Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=78e8665a-834e-4c5b-a337-c5ad121a487d name=/runtime.v1.RuntimeService/Version
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.570046865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20ef8a45-3ac0-42a7-8058-d2b7879fd330 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.570619591Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721241173570595321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124362,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20ef8a45-3ac0-42a7-8058-d2b7879fd330 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.571221896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfc702fb-cfc0-4f39-bf2b-6b81bbe64c5c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.571326421Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfc702fb-cfc0-4f39-bf2b-6b81bbe64c5c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:32:53 pause-371172 crio[2861]: time="2024-07-17 18:32:53.571510246Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6b1e482d1c8ef316e529644708a390e6e7f46dc5f9b2a3272f391471372039b,PodSandboxId:44ffd8512f7ade9b1821a6405a025b08a7faade2182bb67cbf7ed33b961a60ff,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168292272767,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fds59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 753107be-ccbf-431f-8a2e-e79bdb96f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9edc4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:922a7e0f262a8282d9e72f42fdeca7478428c833dbeb2a9b95a5738d1ef95e69,PodSandboxId:4c4408303c67164222da84a7bf59e287a06e4fc94ed1085a051669523e55e20d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241168214407421,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-884nf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 27cac9c3-742d-416c-a281-0aaf074fbd3a,},Annotations:map[string]string{io.kubernetes.container.hash: ed3513db,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26db6113dcb1ff79fcd77d6d39b46c69b8761312bf5238a27ffd2e11eda174f7,PodSandboxId:06c5f342d1f79b4c2d91bf5328ab371a7f226ef280618ba0f1d3990c7d0c6c34,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,Cre
atedAt:1721241167795198780,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9svn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e,},Annotations:map[string]string{io.kubernetes.container.hash: 4dd93799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b08dd954d955c74b4b84f21646fa33facb15fc2c1e53a68975c3187779cc6a29,PodSandboxId:ad26dc0b8c7874b3a7bbc2e23810502f05e3201a2756f197b8bd1d96e6efa775,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241147345888833,La
bels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6af54c2061de253ace2de68751df8da5,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d1a2aa0f51d772dc4abbce1d3004d6b52f7961de71561a8776ab799c79b8df0,PodSandboxId:560487aab164542ad8417325db6c3c052cb855002b2abbf25560b824f4736d5d,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241147342168856,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fa797dfdfb736f9e861ba1561f2f58,},Annotations:map[string]string{io.kubernetes.container.hash: 7731edf5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ec9b207bfd3ffea2c95fc2e155c8e565236b4b1b904baaab96e556de26fe77,PodSandboxId:58f953c6498c815f64e7e72954faa60fe9e485ad07173c9fe959e57055ceffec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241147319076750,Labels:map[string]string{io.kubernetes.container.name: kube-
controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac1aa9fed9b42ec68485013aa64c8d2,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64e3a55717be3283a0695169654d5d905bfebf0b9f499df4ed4bf6766596ea1,PodSandboxId:e5d77122fae676106ac8f266d61cf0116d8b98a826602e4cac2ad55e8ef3a286,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241147237628865,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:406735d310893ae4eeec2b9b969cff1442005eab3956fac313fbf5545470e815,PodSandboxId:f6d12725dc8e4ef65263bba54f4f8d6cea4b89d3899c69d1156a4e7191ba39f0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721240863934863408,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubern
etes.pod.name: kube-apiserver-pause-371172,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f33796bd8651c39cb8d969eedf52c7e,},Annotations:map[string]string{io.kubernetes.container.hash: 641cb56e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfc702fb-cfc0-4f39-bf2b-6b81bbe64c5c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6b1e482d1c8e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   5 seconds ago       Running             coredns                   0                   44ffd8512f7ad       coredns-7db6d8ff4d-fds59
	922a7e0f262a8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   5 seconds ago       Running             coredns                   0                   4c4408303c671       coredns-7db6d8ff4d-884nf
	26db6113dcb1f       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   5 seconds ago       Running             kube-proxy                0                   06c5f342d1f79       kube-proxy-m9svn
	b08dd954d955c       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   26 seconds ago      Running             kube-scheduler            3                   ad26dc0b8c787       kube-scheduler-pause-371172
	5d1a2aa0f51d7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   26 seconds ago      Running             etcd                      4                   560487aab1645       etcd-pause-371172
	82ec9b207bfd3       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   26 seconds ago      Running             kube-controller-manager   3                   58f953c6498c8       kube-controller-manager-pause-371172
	b64e3a55717be       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   26 seconds ago      Running             kube-apiserver            4                   e5d77122fae67       kube-apiserver-pause-371172
	406735d310893       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   5 minutes ago       Exited              kube-apiserver            3                   f6d12725dc8e4       kube-apiserver-pause-371172
	
	
	==> coredns [922a7e0f262a8282d9e72f42fdeca7478428c833dbeb2a9b95a5738d1ef95e69] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e6b1e482d1c8ef316e529644708a390e6e7f46dc5f9b2a3272f391471372039b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               pause-371172
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-371172
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=pause-371172
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_32_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:32:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-371172
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:32:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:32:52 +0000   Wed, 17 Jul 2024 18:32:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:32:52 +0000   Wed, 17 Jul 2024 18:32:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:32:52 +0000   Wed, 17 Jul 2024 18:32:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:32:52 +0000   Wed, 17 Jul 2024 18:32:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.21
	  Hostname:    pause-371172
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 a804fcb9ba4a45f09b8de2e5b44edb1b
	  System UUID:                a804fcb9-ba4a-45f0-9b8d-e2e5b44edb1b
	  Boot ID:                    65b9b303-293e-45ff-9c83-dc6d6afb7884
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-884nf                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6s
	  kube-system                 coredns-7db6d8ff4d-fds59                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6s
	  kube-system                 etcd-pause-371172                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         21s
	  kube-system                 kube-apiserver-pause-371172             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-controller-manager-pause-371172    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-proxy-m9svn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kube-scheduler-pause-371172             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-371172 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-371172 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-371172 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s                kubelet          Node pause-371172 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s                kubelet          Node pause-371172 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s                kubelet          Node pause-371172 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7s                 node-controller  Node pause-371172 event: Registered Node pause-371172 in Controller
	
	
	==> dmesg <==
	[  +4.191979] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +5.159806] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.068107] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.020814] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.081460] kauditd_printk_skb: 69 callbacks suppressed
	[ +15.293657] systemd-fstab-generator[1517]: Ignoring "noauto" option for root device
	[  +0.110867] kauditd_printk_skb: 21 callbacks suppressed
	[Jul17 18:25] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.081167] systemd-fstab-generator[2624]: Ignoring "noauto" option for root device
	[  +0.188988] systemd-fstab-generator[2657]: Ignoring "noauto" option for root device
	[  +0.217452] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.204109] systemd-fstab-generator[2692]: Ignoring "noauto" option for root device
	[  +0.403525] systemd-fstab-generator[2723]: Ignoring "noauto" option for root device
	[Jul17 18:27] systemd-fstab-generator[2968]: Ignoring "noauto" option for root device
	[  +0.078165] kauditd_printk_skb: 174 callbacks suppressed
	[  +5.973819] kauditd_printk_skb: 60 callbacks suppressed
	[  +5.452425] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.507164] systemd-fstab-generator[3722]: Ignoring "noauto" option for root device
	[  +0.744433] kauditd_printk_skb: 23 callbacks suppressed
	[Jul17 18:32] kauditd_printk_skb: 5 callbacks suppressed
	[ +17.493888] systemd-fstab-generator[5353]: Ignoring "noauto" option for root device
	[  +6.052760] systemd-fstab-generator[5682]: Ignoring "noauto" option for root device
	[  +0.077055] kauditd_printk_skb: 63 callbacks suppressed
	[ +15.676874] systemd-fstab-generator[5894]: Ignoring "noauto" option for root device
	[  +0.090241] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [5d1a2aa0f51d772dc4abbce1d3004d6b52f7961de71561a8776ab799c79b8df0] <==
	{"level":"info","ts":"2024-07-17T18:32:27.686609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab switched to configuration voters=(7747864092090557611)"}
	{"level":"info","ts":"2024-07-17T18:32:27.686797Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f04757488c993a3","local-member-id":"6b85f157810fe4ab","added-peer-id":"6b85f157810fe4ab","added-peer-peer-urls":["https://192.168.50.21:2380"]}
	{"level":"info","ts":"2024-07-17T18:32:27.725953Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T18:32:27.726138Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6b85f157810fe4ab","initial-advertise-peer-urls":["https://192.168.50.21:2380"],"listen-peer-urls":["https://192.168.50.21:2380"],"advertise-client-urls":["https://192.168.50.21:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.21:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T18:32:27.726168Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T18:32:27.726309Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.21:2380"}
	{"level":"info","ts":"2024-07-17T18:32:27.726325Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.21:2380"}
	{"level":"info","ts":"2024-07-17T18:32:27.745281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T18:32:27.74532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T18:32:27.745338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab received MsgPreVoteResp from 6b85f157810fe4ab at term 1"}
	{"level":"info","ts":"2024-07-17T18:32:27.745349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:32:27.745354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab received MsgVoteResp from 6b85f157810fe4ab at term 2"}
	{"level":"info","ts":"2024-07-17T18:32:27.745362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab became leader at term 2"}
	{"level":"info","ts":"2024-07-17T18:32:27.745369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b85f157810fe4ab elected leader 6b85f157810fe4ab at term 2"}
	{"level":"info","ts":"2024-07-17T18:32:27.749379Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:32:27.751531Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6b85f157810fe4ab","local-member-attributes":"{Name:pause-371172 ClientURLs:[https://192.168.50.21:2379]}","request-path":"/0/members/6b85f157810fe4ab/attributes","cluster-id":"6f04757488c993a3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:32:27.751768Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:32:27.755208Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:32:27.75531Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:32:27.755445Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:32:27.756647Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f04757488c993a3","local-member-id":"6b85f157810fe4ab","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:32:27.758365Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:32:27.75846Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:32:27.772945Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.21:2379"}
	{"level":"info","ts":"2024-07-17T18:32:27.773195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:32:53 up 8 min,  0 users,  load average: 1.30, 0.60, 0.30
	Linux pause-371172 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [406735d310893ae4eeec2b9b969cff1442005eab3956fac313fbf5545470e815] <==
	W0717 18:32:23.246800       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.301172       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.306281       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.338399       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.356652       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.360321       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.372871       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.434763       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.480717       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.532284       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.554339       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.555599       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.573558       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.587512       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.592372       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.636154       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.675087       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.678982       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.680371       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.692810       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.698627       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.701153       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.713521       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.817426       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:32:23.824387       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b64e3a55717be3283a0695169654d5d905bfebf0b9f499df4ed4bf6766596ea1] <==
	I0717 18:32:29.858429       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 18:32:29.858437       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 18:32:29.858443       1 cache.go:39] Caches are synced for autoregister controller
	E0717 18:32:29.882698       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0717 18:32:29.887079       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0717 18:32:29.896652       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 18:32:29.906284       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 18:32:29.906362       1 policy_source.go:224] refreshing policies
	I0717 18:32:29.931718       1 controller.go:615] quota admission added evaluator for: namespaces
	I0717 18:32:30.103510       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:32:30.721559       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 18:32:30.726005       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 18:32:30.726034       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 18:32:31.284411       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 18:32:31.323719       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 18:32:31.456449       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0717 18:32:31.470761       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.21]
	I0717 18:32:31.471673       1 controller.go:615] quota admission added evaluator for: endpoints
	I0717 18:32:31.484375       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 18:32:31.802215       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0717 18:32:32.200900       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0717 18:32:32.213102       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0717 18:32:32.221493       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0717 18:32:46.914208       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0717 18:32:47.363716       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [82ec9b207bfd3ffea2c95fc2e155c8e565236b4b1b904baaab96e556de26fe77] <==
	I0717 18:32:46.414294       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 18:32:46.419308       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 18:32:46.420628       1 shared_informer.go:320] Caches are synced for node
	I0717 18:32:46.420701       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0717 18:32:46.420738       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0717 18:32:46.420765       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0717 18:32:46.420787       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0717 18:32:46.428872       1 shared_informer.go:320] Caches are synced for PV protection
	I0717 18:32:46.437530       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="pause-371172" podCIDRs=["10.244.0.0/24"]
	I0717 18:32:46.464568       1 shared_informer.go:320] Caches are synced for namespace
	I0717 18:32:46.565351       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 18:32:46.565826       1 shared_informer.go:320] Caches are synced for attach detach
	I0717 18:32:46.612086       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0717 18:32:47.044559       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 18:32:47.061288       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 18:32:47.061320       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 18:32:47.596349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="674.93572ms"
	I0717 18:32:47.612328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.852255ms"
	I0717 18:32:47.619450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.551µs"
	I0717 18:32:47.626492       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.893µs"
	I0717 18:32:49.200776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="150.216µs"
	I0717 18:32:49.229844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.036747ms"
	I0717 18:32:49.232747       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="252.969µs"
	I0717 18:32:49.258068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.132606ms"
	I0717 18:32:49.258669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.832µs"
	
	
	==> kube-proxy [26db6113dcb1ff79fcd77d6d39b46c69b8761312bf5238a27ffd2e11eda174f7] <==
	I0717 18:32:47.973451       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:32:47.995721       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.21"]
	I0717 18:32:48.069590       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:32:48.069633       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:32:48.069649       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:32:48.071892       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:32:48.072115       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:32:48.072133       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:32:48.073887       1 config.go:192] "Starting service config controller"
	I0717 18:32:48.073915       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:32:48.073940       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:32:48.073945       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:32:48.074854       1 config.go:319] "Starting node config controller"
	I0717 18:32:48.074922       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:32:48.174477       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:32:48.174546       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:32:48.175711       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b08dd954d955c74b4b84f21646fa33facb15fc2c1e53a68975c3187779cc6a29] <==
	W0717 18:32:29.836736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:32:29.836757       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:32:29.840683       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:32:29.840720       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:32:30.643070       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:32:30.643123       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:32:30.721585       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:32:30.721666       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:32:30.750188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:32:30.750652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:32:30.810024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:32:30.810157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:32:30.835514       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:32:30.835623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 18:32:30.910117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:32:30.910279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:32:30.936393       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:32:30.937381       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:32:31.072785       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:32:31.072892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:32:31.080318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:32:31.080395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:32:31.086975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:32:31.087049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0717 18:32:32.931522       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 18:32:32 pause-371172 kubelet[5689]: I0717 18:32:32.371841    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/52fa797dfdfb736f9e861ba1561f2f58-etcd-certs\") pod \"etcd-pause-371172\" (UID: \"52fa797dfdfb736f9e861ba1561f2f58\") " pod="kube-system/etcd-pause-371172"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.048281    5689 apiserver.go:52] "Watching apiserver"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.070450    5689 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: E0717 18:32:33.149534    5689 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-pause-371172\" already exists" pod="kube-system/kube-controller-manager-pause-371172"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: E0717 18:32:33.150377    5689 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-371172\" already exists" pod="kube-system/kube-apiserver-pause-371172"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.167751    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-371172" podStartSLOduration=1.167717223 podStartE2EDuration="1.167717223s" podCreationTimestamp="2024-07-17 18:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:33.157701122 +0000 UTC m=+1.191363528" watchObservedRunningTime="2024-07-17 18:32:33.167717223 +0000 UTC m=+1.201379631"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.178509    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-371172" podStartSLOduration=1.178492126 podStartE2EDuration="1.178492126s" podCreationTimestamp="2024-07-17 18:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:33.168310041 +0000 UTC m=+1.201972442" watchObservedRunningTime="2024-07-17 18:32:33.178492126 +0000 UTC m=+1.212154532"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.189707    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-371172" podStartSLOduration=1.18969162 podStartE2EDuration="1.18969162s" podCreationTimestamp="2024-07-17 18:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:33.178977418 +0000 UTC m=+1.212639809" watchObservedRunningTime="2024-07-17 18:32:33.18969162 +0000 UTC m=+1.223354022"
	Jul 17 18:32:33 pause-371172 kubelet[5689]: I0717 18:32:33.190379    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-371172" podStartSLOduration=1.19036118 podStartE2EDuration="1.19036118s" podCreationTimestamp="2024-07-17 18:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:33.190218444 +0000 UTC m=+1.223880835" watchObservedRunningTime="2024-07-17 18:32:33.19036118 +0000 UTC m=+1.224023597"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.383821    5689 topology_manager.go:215] "Topology Admit Handler" podUID="9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e" podNamespace="kube-system" podName="kube-proxy-m9svn"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.473816    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e-lib-modules\") pod \"kube-proxy-m9svn\" (UID: \"9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e\") " pod="kube-system/kube-proxy-m9svn"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.473867    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e-kube-proxy\") pod \"kube-proxy-m9svn\" (UID: \"9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e\") " pod="kube-system/kube-proxy-m9svn"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.473888    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e-xtables-lock\") pod \"kube-proxy-m9svn\" (UID: \"9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e\") " pod="kube-system/kube-proxy-m9svn"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.473904    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6glv\" (UniqueName: \"kubernetes.io/projected/9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e-kube-api-access-x6glv\") pod \"kube-proxy-m9svn\" (UID: \"9b38634f-58b2-48f1-bcd2-bae4fb1f5e7e\") " pod="kube-system/kube-proxy-m9svn"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.544001    5689 topology_manager.go:215] "Topology Admit Handler" podUID="27cac9c3-742d-416c-a281-0aaf074fbd3a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-884nf"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.574603    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27cac9c3-742d-416c-a281-0aaf074fbd3a-config-volume\") pod \"coredns-7db6d8ff4d-884nf\" (UID: \"27cac9c3-742d-416c-a281-0aaf074fbd3a\") " pod="kube-system/coredns-7db6d8ff4d-884nf"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.574652    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zpgd\" (UniqueName: \"kubernetes.io/projected/27cac9c3-742d-416c-a281-0aaf074fbd3a-kube-api-access-8zpgd\") pod \"coredns-7db6d8ff4d-884nf\" (UID: \"27cac9c3-742d-416c-a281-0aaf074fbd3a\") " pod="kube-system/coredns-7db6d8ff4d-884nf"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.587531    5689 topology_manager.go:215] "Topology Admit Handler" podUID="753107be-ccbf-431f-8a2e-e79bdb96f7c4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fds59"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.675621    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvjl\" (UniqueName: \"kubernetes.io/projected/753107be-ccbf-431f-8a2e-e79bdb96f7c4-kube-api-access-vpvjl\") pod \"coredns-7db6d8ff4d-fds59\" (UID: \"753107be-ccbf-431f-8a2e-e79bdb96f7c4\") " pod="kube-system/coredns-7db6d8ff4d-fds59"
	Jul 17 18:32:47 pause-371172 kubelet[5689]: I0717 18:32:47.675861    5689 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/753107be-ccbf-431f-8a2e-e79bdb96f7c4-config-volume\") pod \"coredns-7db6d8ff4d-fds59\" (UID: \"753107be-ccbf-431f-8a2e-e79bdb96f7c4\") " pod="kube-system/coredns-7db6d8ff4d-fds59"
	Jul 17 18:32:49 pause-371172 kubelet[5689]: I0717 18:32:49.197719    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m9svn" podStartSLOduration=2.197699881 podStartE2EDuration="2.197699881s" podCreationTimestamp="2024-07-17 18:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:48.203973941 +0000 UTC m=+16.237636350" watchObservedRunningTime="2024-07-17 18:32:49.197699881 +0000 UTC m=+17.231362282"
	Jul 17 18:32:49 pause-371172 kubelet[5689]: I0717 18:32:49.216502    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fds59" podStartSLOduration=2.216480674 podStartE2EDuration="2.216480674s" podCreationTimestamp="2024-07-17 18:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:49.198948443 +0000 UTC m=+17.232610852" watchObservedRunningTime="2024-07-17 18:32:49.216480674 +0000 UTC m=+17.250143083"
	Jul 17 18:32:49 pause-371172 kubelet[5689]: I0717 18:32:49.242969    5689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-884nf" podStartSLOduration=2.242911771 podStartE2EDuration="2.242911771s" podCreationTimestamp="2024-07-17 18:32:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-17 18:32:49.21760968 +0000 UTC m=+17.251272090" watchObservedRunningTime="2024-07-17 18:32:49.242911771 +0000 UTC m=+17.276574177"
	Jul 17 18:32:52 pause-371172 kubelet[5689]: I0717 18:32:52.748722    5689 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 18:32:52 pause-371172 kubelet[5689]: I0717 18:32:52.749607    5689 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-371172 -n pause-371172
helpers_test.go:261: (dbg) Run:  kubectl --context pause-371172 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (438.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (267.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-019549 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0717 18:30:41.791780   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-019549 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m27.201164285s)

                                                
                                                
-- stdout --
	* [old-k8s-version-019549] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-019549" primary control-plane node in "old-k8s-version-019549" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:30:39.690065   74819 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:30:39.690190   74819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:30:39.690202   74819 out.go:304] Setting ErrFile to fd 2...
	I0717 18:30:39.690206   74819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:30:39.690467   74819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:30:39.691197   74819 out.go:298] Setting JSON to false
	I0717 18:30:39.692418   74819 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7983,"bootTime":1721233057,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:30:39.692475   74819 start.go:139] virtualization: kvm guest
	I0717 18:30:39.694812   74819 out.go:177] * [old-k8s-version-019549] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:30:39.696294   74819 notify.go:220] Checking for updates...
	I0717 18:30:39.696301   74819 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:30:39.697659   74819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:30:39.699051   74819 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:30:39.700450   74819 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:30:39.701992   74819 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:30:39.703369   74819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:30:39.705069   74819 config.go:182] Loaded profile config "enable-default-cni-235476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:30:39.705159   74819 config.go:182] Loaded profile config "flannel-235476": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:30:39.705270   74819 config.go:182] Loaded profile config "pause-371172": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:30:39.705342   74819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:30:39.742890   74819 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:30:39.744171   74819 start.go:297] selected driver: kvm2
	I0717 18:30:39.744186   74819 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:30:39.744195   74819 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:30:39.744920   74819 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:30:39.745072   74819 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:30:39.760510   74819 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:30:39.760559   74819 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 18:30:39.760771   74819 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:30:39.760796   74819 cni.go:84] Creating CNI manager for ""
	I0717 18:30:39.760804   74819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:30:39.760813   74819 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 18:30:39.760861   74819 start.go:340] cluster config:
	{Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:30:39.760991   74819 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:30:39.762860   74819 out.go:177] * Starting "old-k8s-version-019549" primary control-plane node in "old-k8s-version-019549" cluster
	I0717 18:30:39.764094   74819 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:30:39.764144   74819 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 18:30:39.764156   74819 cache.go:56] Caching tarball of preloaded images
	I0717 18:30:39.764249   74819 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:30:39.764264   74819 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 18:30:39.764396   74819 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/config.json ...
	I0717 18:30:39.764430   74819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/config.json: {Name:mk0415238db4f30840385a6cddecf79c168d41d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:30:39.764588   74819 start.go:360] acquireMachinesLock for old-k8s-version-019549: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:30:39.764621   74819 start.go:364] duration metric: took 17.904µs to acquireMachinesLock for "old-k8s-version-019549"
	I0717 18:30:39.764640   74819 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:30:39.764723   74819 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 18:30:39.766169   74819 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 18:30:39.766305   74819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:30:39.766353   74819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:30:39.780556   74819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38823
	I0717 18:30:39.781020   74819 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:30:39.781553   74819 main.go:141] libmachine: Using API Version  1
	I0717 18:30:39.781574   74819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:30:39.781949   74819 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:30:39.782189   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:30:39.782350   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:30:39.782511   74819 start.go:159] libmachine.API.Create for "old-k8s-version-019549" (driver="kvm2")
	I0717 18:30:39.782538   74819 client.go:168] LocalClient.Create starting
	I0717 18:30:39.782578   74819 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 18:30:39.782616   74819 main.go:141] libmachine: Decoding PEM data...
	I0717 18:30:39.782633   74819 main.go:141] libmachine: Parsing certificate...
	I0717 18:30:39.782687   74819 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 18:30:39.782704   74819 main.go:141] libmachine: Decoding PEM data...
	I0717 18:30:39.782717   74819 main.go:141] libmachine: Parsing certificate...
	I0717 18:30:39.782731   74819 main.go:141] libmachine: Running pre-create checks...
	I0717 18:30:39.782744   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .PreCreateCheck
	I0717 18:30:39.783151   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetConfigRaw
	I0717 18:30:39.783613   74819 main.go:141] libmachine: Creating machine...
	I0717 18:30:39.783629   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .Create
	I0717 18:30:39.783763   74819 main.go:141] libmachine: (old-k8s-version-019549) Creating KVM machine...
	I0717 18:30:39.785160   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found existing default KVM network
	I0717 18:30:39.786770   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:39.786641   74843 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0d0}
	I0717 18:30:39.786791   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | created network xml: 
	I0717 18:30:39.786802   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | <network>
	I0717 18:30:39.786814   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG |   <name>mk-old-k8s-version-019549</name>
	I0717 18:30:39.786822   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG |   <dns enable='no'/>
	I0717 18:30:39.786829   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG |   
	I0717 18:30:39.786838   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 18:30:39.786849   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG |     <dhcp>
	I0717 18:30:39.786855   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 18:30:39.786863   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG |     </dhcp>
	I0717 18:30:39.786868   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG |   </ip>
	I0717 18:30:39.786874   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG |   
	I0717 18:30:39.786880   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | </network>
	I0717 18:30:39.786891   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | 
	I0717 18:30:39.791942   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | trying to create private KVM network mk-old-k8s-version-019549 192.168.39.0/24...
	I0717 18:30:39.871303   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | private KVM network mk-old-k8s-version-019549 192.168.39.0/24 created
	I0717 18:30:39.871335   74819 main.go:141] libmachine: (old-k8s-version-019549) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549 ...
	I0717 18:30:39.871352   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:39.871271   74843 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:30:39.871376   74819 main.go:141] libmachine: (old-k8s-version-019549) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 18:30:39.871517   74819 main.go:141] libmachine: (old-k8s-version-019549) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 18:30:40.134523   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:40.134415   74843 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa...
	I0717 18:30:40.380691   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:40.380573   74843 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/old-k8s-version-019549.rawdisk...
	I0717 18:30:40.380733   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Writing magic tar header
	I0717 18:30:40.380749   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Writing SSH key tar header
	I0717 18:30:40.380761   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:40.380687   74843 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549 ...
	I0717 18:30:40.380843   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549
	I0717 18:30:40.380876   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 18:30:40.380891   74819 main.go:141] libmachine: (old-k8s-version-019549) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549 (perms=drwx------)
	I0717 18:30:40.380904   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:30:40.380918   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 18:30:40.380930   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 18:30:40.380959   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Checking permissions on dir: /home/jenkins
	I0717 18:30:40.380980   74819 main.go:141] libmachine: (old-k8s-version-019549) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 18:30:40.380989   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Checking permissions on dir: /home
	I0717 18:30:40.381002   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Skipping /home - not owner
	I0717 18:30:40.381024   74819 main.go:141] libmachine: (old-k8s-version-019549) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 18:30:40.381039   74819 main.go:141] libmachine: (old-k8s-version-019549) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 18:30:40.381062   74819 main.go:141] libmachine: (old-k8s-version-019549) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 18:30:40.381078   74819 main.go:141] libmachine: (old-k8s-version-019549) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 18:30:40.381092   74819 main.go:141] libmachine: (old-k8s-version-019549) Creating domain...
	I0717 18:30:40.382108   74819 main.go:141] libmachine: (old-k8s-version-019549) define libvirt domain using xml: 
	I0717 18:30:40.382140   74819 main.go:141] libmachine: (old-k8s-version-019549) <domain type='kvm'>
	I0717 18:30:40.382151   74819 main.go:141] libmachine: (old-k8s-version-019549)   <name>old-k8s-version-019549</name>
	I0717 18:30:40.382169   74819 main.go:141] libmachine: (old-k8s-version-019549)   <memory unit='MiB'>2200</memory>
	I0717 18:30:40.382178   74819 main.go:141] libmachine: (old-k8s-version-019549)   <vcpu>2</vcpu>
	I0717 18:30:40.382185   74819 main.go:141] libmachine: (old-k8s-version-019549)   <features>
	I0717 18:30:40.382194   74819 main.go:141] libmachine: (old-k8s-version-019549)     <acpi/>
	I0717 18:30:40.382200   74819 main.go:141] libmachine: (old-k8s-version-019549)     <apic/>
	I0717 18:30:40.382212   74819 main.go:141] libmachine: (old-k8s-version-019549)     <pae/>
	I0717 18:30:40.382226   74819 main.go:141] libmachine: (old-k8s-version-019549)     
	I0717 18:30:40.382237   74819 main.go:141] libmachine: (old-k8s-version-019549)   </features>
	I0717 18:30:40.382246   74819 main.go:141] libmachine: (old-k8s-version-019549)   <cpu mode='host-passthrough'>
	I0717 18:30:40.382270   74819 main.go:141] libmachine: (old-k8s-version-019549)   
	I0717 18:30:40.382290   74819 main.go:141] libmachine: (old-k8s-version-019549)   </cpu>
	I0717 18:30:40.382298   74819 main.go:141] libmachine: (old-k8s-version-019549)   <os>
	I0717 18:30:40.382306   74819 main.go:141] libmachine: (old-k8s-version-019549)     <type>hvm</type>
	I0717 18:30:40.382314   74819 main.go:141] libmachine: (old-k8s-version-019549)     <boot dev='cdrom'/>
	I0717 18:30:40.382322   74819 main.go:141] libmachine: (old-k8s-version-019549)     <boot dev='hd'/>
	I0717 18:30:40.382331   74819 main.go:141] libmachine: (old-k8s-version-019549)     <bootmenu enable='no'/>
	I0717 18:30:40.382340   74819 main.go:141] libmachine: (old-k8s-version-019549)   </os>
	I0717 18:30:40.382348   74819 main.go:141] libmachine: (old-k8s-version-019549)   <devices>
	I0717 18:30:40.382359   74819 main.go:141] libmachine: (old-k8s-version-019549)     <disk type='file' device='cdrom'>
	I0717 18:30:40.382373   74819 main.go:141] libmachine: (old-k8s-version-019549)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/boot2docker.iso'/>
	I0717 18:30:40.382383   74819 main.go:141] libmachine: (old-k8s-version-019549)       <target dev='hdc' bus='scsi'/>
	I0717 18:30:40.382391   74819 main.go:141] libmachine: (old-k8s-version-019549)       <readonly/>
	I0717 18:30:40.382400   74819 main.go:141] libmachine: (old-k8s-version-019549)     </disk>
	I0717 18:30:40.382409   74819 main.go:141] libmachine: (old-k8s-version-019549)     <disk type='file' device='disk'>
	I0717 18:30:40.382417   74819 main.go:141] libmachine: (old-k8s-version-019549)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 18:30:40.382430   74819 main.go:141] libmachine: (old-k8s-version-019549)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/old-k8s-version-019549.rawdisk'/>
	I0717 18:30:40.382442   74819 main.go:141] libmachine: (old-k8s-version-019549)       <target dev='hda' bus='virtio'/>
	I0717 18:30:40.382470   74819 main.go:141] libmachine: (old-k8s-version-019549)     </disk>
	I0717 18:30:40.382486   74819 main.go:141] libmachine: (old-k8s-version-019549)     <interface type='network'>
	I0717 18:30:40.382498   74819 main.go:141] libmachine: (old-k8s-version-019549)       <source network='mk-old-k8s-version-019549'/>
	I0717 18:30:40.382510   74819 main.go:141] libmachine: (old-k8s-version-019549)       <model type='virtio'/>
	I0717 18:30:40.382519   74819 main.go:141] libmachine: (old-k8s-version-019549)     </interface>
	I0717 18:30:40.382530   74819 main.go:141] libmachine: (old-k8s-version-019549)     <interface type='network'>
	I0717 18:30:40.382540   74819 main.go:141] libmachine: (old-k8s-version-019549)       <source network='default'/>
	I0717 18:30:40.382550   74819 main.go:141] libmachine: (old-k8s-version-019549)       <model type='virtio'/>
	I0717 18:30:40.382560   74819 main.go:141] libmachine: (old-k8s-version-019549)     </interface>
	I0717 18:30:40.382575   74819 main.go:141] libmachine: (old-k8s-version-019549)     <serial type='pty'>
	I0717 18:30:40.382587   74819 main.go:141] libmachine: (old-k8s-version-019549)       <target port='0'/>
	I0717 18:30:40.382595   74819 main.go:141] libmachine: (old-k8s-version-019549)     </serial>
	I0717 18:30:40.382608   74819 main.go:141] libmachine: (old-k8s-version-019549)     <console type='pty'>
	I0717 18:30:40.382630   74819 main.go:141] libmachine: (old-k8s-version-019549)       <target type='serial' port='0'/>
	I0717 18:30:40.382641   74819 main.go:141] libmachine: (old-k8s-version-019549)     </console>
	I0717 18:30:40.382649   74819 main.go:141] libmachine: (old-k8s-version-019549)     <rng model='virtio'>
	I0717 18:30:40.382663   74819 main.go:141] libmachine: (old-k8s-version-019549)       <backend model='random'>/dev/random</backend>
	I0717 18:30:40.382671   74819 main.go:141] libmachine: (old-k8s-version-019549)     </rng>
	I0717 18:30:40.382679   74819 main.go:141] libmachine: (old-k8s-version-019549)     
	I0717 18:30:40.382690   74819 main.go:141] libmachine: (old-k8s-version-019549)     
	I0717 18:30:40.382699   74819 main.go:141] libmachine: (old-k8s-version-019549)   </devices>
	I0717 18:30:40.382706   74819 main.go:141] libmachine: (old-k8s-version-019549) </domain>
	I0717 18:30:40.382716   74819 main.go:141] libmachine: (old-k8s-version-019549) 
	I0717 18:30:40.387455   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:39:81:22 in network default
	I0717 18:30:40.388071   74819 main.go:141] libmachine: (old-k8s-version-019549) Ensuring networks are active...
	I0717 18:30:40.388092   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:40.389001   74819 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network default is active
	I0717 18:30:40.389444   74819 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network mk-old-k8s-version-019549 is active
	I0717 18:30:40.390149   74819 main.go:141] libmachine: (old-k8s-version-019549) Getting domain xml...
	I0717 18:30:40.391076   74819 main.go:141] libmachine: (old-k8s-version-019549) Creating domain...
	I0717 18:30:41.686632   74819 main.go:141] libmachine: (old-k8s-version-019549) Waiting to get IP...
	I0717 18:30:41.687806   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:41.688357   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:41.688391   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:41.688307   74843 retry.go:31] will retry after 274.627341ms: waiting for machine to come up
	I0717 18:30:41.964736   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:41.965352   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:41.965379   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:41.965306   74843 retry.go:31] will retry after 313.517425ms: waiting for machine to come up
	I0717 18:30:42.280887   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:42.281456   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:42.281478   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:42.281413   74843 retry.go:31] will retry after 445.034782ms: waiting for machine to come up
	I0717 18:30:42.727833   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:42.728326   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:42.728375   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:42.728318   74843 retry.go:31] will retry after 520.980617ms: waiting for machine to come up
	I0717 18:30:43.251154   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:43.251737   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:43.251780   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:43.251690   74843 retry.go:31] will retry after 465.647984ms: waiting for machine to come up
	I0717 18:30:43.719302   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:43.719805   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:43.719825   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:43.719769   74843 retry.go:31] will retry after 808.348544ms: waiting for machine to come up
	I0717 18:30:44.529290   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:44.529740   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:44.529794   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:44.529684   74843 retry.go:31] will retry after 734.124194ms: waiting for machine to come up
	I0717 18:30:45.265134   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:45.265892   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:45.265937   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:45.265854   74843 retry.go:31] will retry after 939.260076ms: waiting for machine to come up
	I0717 18:30:46.206989   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:46.207448   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:46.207474   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:46.207390   74843 retry.go:31] will retry after 1.148269743s: waiting for machine to come up
	I0717 18:30:47.357904   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:47.358568   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:47.358596   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:47.358514   74843 retry.go:31] will retry after 1.959579957s: waiting for machine to come up
	I0717 18:30:49.319977   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:49.320500   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:49.320526   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:49.320428   74843 retry.go:31] will retry after 1.868161384s: waiting for machine to come up
	I0717 18:30:51.190160   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:51.190697   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:51.190729   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:51.190655   74843 retry.go:31] will retry after 3.577464874s: waiting for machine to come up
	I0717 18:30:54.772338   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:54.772797   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:54.772827   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:54.772748   74843 retry.go:31] will retry after 2.92588429s: waiting for machine to come up
	I0717 18:30:57.699684   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:30:57.700328   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:30:57.700358   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:30:57.700237   74843 retry.go:31] will retry after 4.440032289s: waiting for machine to come up
	I0717 18:31:02.142136   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.142739   74819 main.go:141] libmachine: (old-k8s-version-019549) Found IP for machine: 192.168.39.128
	I0717 18:31:02.142763   74819 main.go:141] libmachine: (old-k8s-version-019549) Reserving static IP address...
	I0717 18:31:02.142773   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has current primary IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.143160   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"} in network mk-old-k8s-version-019549
	I0717 18:31:02.219233   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Getting to WaitForSSH function...
	I0717 18:31:02.219259   74819 main.go:141] libmachine: (old-k8s-version-019549) Reserved static IP address: 192.168.39.128
	I0717 18:31:02.219273   74819 main.go:141] libmachine: (old-k8s-version-019549) Waiting for SSH to be available...
	I0717 18:31:02.222083   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.222579   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:02.222605   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.222749   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH client type: external
	I0717 18:31:02.222777   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa (-rw-------)
	I0717 18:31:02.222821   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:31:02.222841   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | About to run SSH command:
	I0717 18:31:02.222856   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | exit 0
	I0717 18:31:02.348741   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | SSH cmd err, output: <nil>: 
	I0717 18:31:02.348979   74819 main.go:141] libmachine: (old-k8s-version-019549) KVM machine creation complete!
	I0717 18:31:02.349402   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetConfigRaw
	I0717 18:31:02.349986   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:31:02.350228   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:31:02.350370   74819 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 18:31:02.350386   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetState
	I0717 18:31:02.351666   74819 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 18:31:02.351682   74819 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 18:31:02.351689   74819 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 18:31:02.351698   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:31:02.353966   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.354335   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:02.354360   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.354509   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:31:02.354685   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:02.354864   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:02.354972   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:31:02.355146   74819 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:02.355368   74819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:31:02.355383   74819 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 18:31:02.456155   74819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:31:02.456179   74819 main.go:141] libmachine: Detecting the provisioner...
	I0717 18:31:02.456189   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:31:02.458977   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.459335   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:02.459372   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.459511   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:31:02.459700   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:02.459879   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:02.459989   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:31:02.460167   74819 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:02.460386   74819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:31:02.460400   74819 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 18:31:02.565568   74819 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 18:31:02.565634   74819 main.go:141] libmachine: found compatible host: buildroot
	I0717 18:31:02.565642   74819 main.go:141] libmachine: Provisioning with buildroot...
	I0717 18:31:02.565656   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:31:02.565905   74819 buildroot.go:166] provisioning hostname "old-k8s-version-019549"
	I0717 18:31:02.565924   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:31:02.566085   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:31:02.568777   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.569236   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:02.569261   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.569413   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:31:02.569572   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:02.569707   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:02.569855   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:31:02.570023   74819 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:02.570233   74819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:31:02.570250   74819 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-019549 && echo "old-k8s-version-019549" | sudo tee /etc/hostname
	I0717 18:31:02.689891   74819 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-019549
	
	I0717 18:31:02.689922   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:31:02.692837   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.693231   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:02.693263   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.693452   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:31:02.693656   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:02.693835   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:02.693985   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:31:02.694156   74819 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:02.694326   74819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:31:02.694341   74819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-019549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-019549/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-019549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:31:02.805698   74819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:31:02.805724   74819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:31:02.805742   74819 buildroot.go:174] setting up certificates
	I0717 18:31:02.805754   74819 provision.go:84] configureAuth start
	I0717 18:31:02.805766   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:31:02.806020   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:31:02.808553   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.808913   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:02.808959   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.809081   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:31:02.811561   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.811859   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:02.811882   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.812030   74819 provision.go:143] copyHostCerts
	I0717 18:31:02.812098   74819 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:31:02.812113   74819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:31:02.812184   74819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:31:02.812326   74819 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:31:02.812340   74819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:31:02.812382   74819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:31:02.812498   74819 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:31:02.812508   74819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:31:02.812535   74819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:31:02.812595   74819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-019549 san=[127.0.0.1 192.168.39.128 localhost minikube old-k8s-version-019549]
	I0717 18:31:02.899579   74819 provision.go:177] copyRemoteCerts
	I0717 18:31:02.899636   74819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:31:02.899658   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:31:02.902547   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.902893   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:02.902923   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:02.903084   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:31:02.903268   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:02.903433   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:31:02.903571   74819 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:31:02.982568   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:31:03.007408   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 18:31:03.031336   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:31:03.053415   74819 provision.go:87] duration metric: took 247.64678ms to configureAuth
	I0717 18:31:03.053448   74819 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:31:03.053637   74819 config.go:182] Loaded profile config "old-k8s-version-019549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:31:03.053711   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:31:03.056427   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.056747   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:03.056775   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.056925   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:31:03.057127   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:03.057271   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:03.057420   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:31:03.057596   74819 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:03.057746   74819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:31:03.057759   74819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:31:03.318755   74819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:31:03.318805   74819 main.go:141] libmachine: Checking connection to Docker...
	I0717 18:31:03.318817   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetURL
	I0717 18:31:03.320068   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using libvirt version 6000000
	I0717 18:31:03.322595   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.322951   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:03.322985   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.323167   74819 main.go:141] libmachine: Docker is up and running!
	I0717 18:31:03.323184   74819 main.go:141] libmachine: Reticulating splines...
	I0717 18:31:03.323192   74819 client.go:171] duration metric: took 23.540643514s to LocalClient.Create
	I0717 18:31:03.323213   74819 start.go:167] duration metric: took 23.540701465s to libmachine.API.Create "old-k8s-version-019549"
	I0717 18:31:03.323226   74819 start.go:293] postStartSetup for "old-k8s-version-019549" (driver="kvm2")
	I0717 18:31:03.323240   74819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:31:03.323261   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:31:03.323513   74819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:31:03.323543   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:31:03.325606   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.325960   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:03.325996   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.326116   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:31:03.326287   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:03.326464   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:31:03.326605   74819 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:31:03.406742   74819 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:31:03.410546   74819 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:31:03.410571   74819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:31:03.410637   74819 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:31:03.410727   74819 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:31:03.410846   74819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:31:03.419372   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:31:03.443338   74819 start.go:296] duration metric: took 120.094121ms for postStartSetup
	I0717 18:31:03.443396   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetConfigRaw
	I0717 18:31:03.444076   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:31:03.447503   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.447898   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:03.447932   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.448143   74819 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/config.json ...
	I0717 18:31:03.448347   74819 start.go:128] duration metric: took 23.683613273s to createHost
	I0717 18:31:03.448377   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:31:03.450939   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.451321   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:03.451348   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.451476   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:31:03.451635   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:03.451798   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:03.451942   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:31:03.452083   74819 main.go:141] libmachine: Using SSH client type: native
	I0717 18:31:03.452244   74819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:31:03.452263   74819 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:31:03.553662   74819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241063.526693702
	
	I0717 18:31:03.553683   74819 fix.go:216] guest clock: 1721241063.526693702
	I0717 18:31:03.553690   74819 fix.go:229] Guest: 2024-07-17 18:31:03.526693702 +0000 UTC Remote: 2024-07-17 18:31:03.448359739 +0000 UTC m=+23.791903532 (delta=78.333963ms)
	I0717 18:31:03.553713   74819 fix.go:200] guest clock delta is within tolerance: 78.333963ms
	I0717 18:31:03.553720   74819 start.go:83] releasing machines lock for "old-k8s-version-019549", held for 23.789089913s
	I0717 18:31:03.553746   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:31:03.554020   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:31:03.556774   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.557238   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:03.557268   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.557442   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:31:03.557912   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:31:03.558200   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:31:03.558331   74819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:31:03.558387   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:31:03.558475   74819 ssh_runner.go:195] Run: cat /version.json
	I0717 18:31:03.558501   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:31:03.561597   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.561787   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.561877   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:03.561900   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.562025   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:31:03.562209   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:03.562202   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:03.562296   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:03.562420   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:31:03.562439   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:31:03.562630   74819 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:31:03.562644   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:31:03.562802   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:31:03.562949   74819 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:31:03.693184   74819 ssh_runner.go:195] Run: systemctl --version
	I0717 18:31:03.699299   74819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:31:03.860954   74819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:31:03.866362   74819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:31:03.866411   74819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:31:03.881024   74819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:31:03.881048   74819 start.go:495] detecting cgroup driver to use...
	I0717 18:31:03.881111   74819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:31:03.897061   74819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:31:03.910491   74819 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:31:03.910555   74819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:31:03.927395   74819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:31:03.943571   74819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:31:04.078120   74819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:31:04.228524   74819 docker.go:233] disabling docker service ...
	I0717 18:31:04.228579   74819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:31:04.241717   74819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:31:04.253318   74819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:31:04.389696   74819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:31:04.547641   74819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:31:04.561107   74819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:31:04.588440   74819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 18:31:04.588516   74819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:04.598857   74819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:31:04.598920   74819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:04.608383   74819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:04.620382   74819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:31:04.632365   74819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:31:04.645402   74819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:31:04.655819   74819 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:31:04.655873   74819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:31:04.672205   74819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:31:04.683069   74819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:31:04.814694   74819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:31:04.946936   74819 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:31:04.947009   74819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:31:04.952031   74819 start.go:563] Will wait 60s for crictl version
	I0717 18:31:04.952122   74819 ssh_runner.go:195] Run: which crictl
	I0717 18:31:04.955700   74819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:31:04.996629   74819 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:31:04.996719   74819 ssh_runner.go:195] Run: crio --version
	I0717 18:31:05.029675   74819 ssh_runner.go:195] Run: crio --version
	I0717 18:31:05.065966   74819 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 18:31:05.067359   74819 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:31:05.070279   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:05.070826   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:30:54 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:31:05.070853   74819 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:31:05.071093   74819 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:31:05.075313   74819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:31:05.087602   74819 kubeadm.go:883] updating cluster {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:31:05.087732   74819 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:31:05.087798   74819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:31:05.120180   74819 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:31:05.120317   74819 ssh_runner.go:195] Run: which lz4
	I0717 18:31:05.124594   74819 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 18:31:05.128983   74819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:31:05.129014   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 18:31:06.592094   74819 crio.go:462] duration metric: took 1.467531696s to copy over tarball
	I0717 18:31:06.592169   74819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:31:09.250346   74819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.658146052s)
	I0717 18:31:09.250403   74819 crio.go:469] duration metric: took 2.658260674s to extract the tarball
	I0717 18:31:09.250413   74819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:31:09.292840   74819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:31:09.338541   74819 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:31:09.338566   74819 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:31:09.338651   74819 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:09.338677   74819 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:31:09.338698   74819 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 18:31:09.338737   74819 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 18:31:09.338659   74819 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:31:09.338699   74819 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:31:09.338652   74819 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:31:09.338677   74819 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:31:09.340798   74819 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:31:09.340867   74819 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:31:09.340798   74819 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:31:09.341176   74819 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:09.341203   74819 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 18:31:09.341640   74819 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:31:09.341905   74819 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 18:31:09.342949   74819 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:31:09.573907   74819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:31:09.577260   74819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:31:09.594991   74819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 18:31:09.606235   74819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:31:09.616492   74819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:31:09.621325   74819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 18:31:09.628838   74819 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 18:31:09.628890   74819 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:31:09.628931   74819 ssh_runner.go:195] Run: which crictl
	I0717 18:31:09.632543   74819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 18:31:09.695537   74819 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 18:31:09.695597   74819 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:31:09.695650   74819 ssh_runner.go:195] Run: which crictl
	I0717 18:31:09.744300   74819 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 18:31:09.744333   74819 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 18:31:09.744346   74819 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 18:31:09.744368   74819 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:31:09.744399   74819 ssh_runner.go:195] Run: which crictl
	I0717 18:31:09.744413   74819 ssh_runner.go:195] Run: which crictl
	I0717 18:31:09.756232   74819 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 18:31:09.756245   74819 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 18:31:09.756276   74819 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 18:31:09.756276   74819 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:31:09.756301   74819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:31:09.756313   74819 ssh_runner.go:195] Run: which crictl
	I0717 18:31:09.756319   74819 ssh_runner.go:195] Run: which crictl
	I0717 18:31:09.756407   74819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:31:09.756324   74819 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 18:31:09.756457   74819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 18:31:09.756473   74819 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:31:09.756413   74819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:31:09.756499   74819 ssh_runner.go:195] Run: which crictl
	I0717 18:31:09.842781   74819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 18:31:09.842882   74819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 18:31:09.842921   74819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 18:31:09.842959   74819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:31:09.843020   74819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 18:31:09.843024   74819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 18:31:09.843070   74819 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 18:31:09.903970   74819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 18:31:09.904070   74819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 18:31:09.904094   74819 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 18:31:10.238129   74819 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:31:10.379548   74819 cache_images.go:92] duration metric: took 1.040963855s to LoadCachedImages
	W0717 18:31:10.379626   74819 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0717 18:31:10.379642   74819 kubeadm.go:934] updating node { 192.168.39.128 8443 v1.20.0 crio true true} ...
	I0717 18:31:10.379777   74819 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-019549 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:31:10.379865   74819 ssh_runner.go:195] Run: crio config
	I0717 18:31:10.437791   74819 cni.go:84] Creating CNI manager for ""
	I0717 18:31:10.437812   74819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:31:10.437824   74819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:31:10.437847   74819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-019549 NodeName:old-k8s-version-019549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 18:31:10.438009   74819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-019549"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:31:10.438088   74819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 18:31:10.449956   74819 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:31:10.450013   74819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:31:10.460463   74819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 18:31:10.476547   74819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:31:10.492084   74819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 18:31:10.509445   74819 ssh_runner.go:195] Run: grep 192.168.39.128	control-plane.minikube.internal$ /etc/hosts
	I0717 18:31:10.513251   74819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:31:10.526119   74819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:31:10.677870   74819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:31:10.697202   74819 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549 for IP: 192.168.39.128
	I0717 18:31:10.697234   74819 certs.go:194] generating shared ca certs ...
	I0717 18:31:10.697256   74819 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:10.697433   74819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:31:10.697498   74819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:31:10.697520   74819 certs.go:256] generating profile certs ...
	I0717 18:31:10.697592   74819 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.key
	I0717 18:31:10.697615   74819 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.crt with IP's: []
	I0717 18:31:10.798057   74819 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.crt ...
	I0717 18:31:10.798087   74819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.crt: {Name:mkf350edecab257be64f4da587d8ed64e15927ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:10.798256   74819 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.key ...
	I0717 18:31:10.798272   74819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.key: {Name:mk3f3535f97adf5bab10ce3ef99f5497e1e10ecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:10.798382   74819 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key.9c9b0a7e
	I0717 18:31:10.798414   74819 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt.9c9b0a7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.128]
	I0717 18:31:10.963734   74819 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt.9c9b0a7e ...
	I0717 18:31:10.963772   74819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt.9c9b0a7e: {Name:mk97d4c073e4301fe87beb7d727c35e706ad890a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:10.963986   74819 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key.9c9b0a7e ...
	I0717 18:31:10.964011   74819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key.9c9b0a7e: {Name:mk2be4e280f915f64dab3da0849fe53c325279f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:10.964125   74819 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt.9c9b0a7e -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt
	I0717 18:31:10.964248   74819 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key.9c9b0a7e -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key
	I0717 18:31:10.964358   74819 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key
	I0717 18:31:10.964384   74819 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.crt with IP's: []
	I0717 18:31:11.060742   74819 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.crt ...
	I0717 18:31:11.060775   74819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.crt: {Name:mk80eaee2913c38fd7107fdb9d5ddd21824fe96a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:11.060976   74819 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key ...
	I0717 18:31:11.060995   74819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key: {Name:mk17d6687cf5ffccba2793e502afa1292fd8c841 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:31:11.061186   74819 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:31:11.061239   74819 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:31:11.061254   74819 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:31:11.061290   74819 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:31:11.061334   74819 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:31:11.061375   74819 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:31:11.061434   74819 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:31:11.062074   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:31:11.087934   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:31:11.112128   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:31:11.135768   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:31:11.158964   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 18:31:11.181513   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:31:11.205179   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:31:11.231072   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:31:11.253263   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:31:11.276704   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:31:11.298682   74819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:31:11.327204   74819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:31:11.343947   74819 ssh_runner.go:195] Run: openssl version
	I0717 18:31:11.349912   74819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:31:11.360185   74819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:31:11.364577   74819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:31:11.364625   74819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:31:11.370278   74819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:31:11.380294   74819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:31:11.390373   74819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:11.394540   74819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:11.394590   74819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:31:11.399795   74819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:31:11.409697   74819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:31:11.420059   74819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:31:11.424212   74819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:31:11.424258   74819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:31:11.429786   74819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:31:11.439630   74819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:31:11.443342   74819 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 18:31:11.443388   74819 kubeadm.go:392] StartCluster: {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:31:11.443459   74819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:31:11.443514   74819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:31:11.487361   74819 cri.go:89] found id: ""
	I0717 18:31:11.487439   74819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:31:11.497497   74819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:31:11.510104   74819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:31:11.520369   74819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:31:11.520390   74819 kubeadm.go:157] found existing configuration files:
	
	I0717 18:31:11.520438   74819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:31:11.532605   74819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:31:11.532684   74819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:31:11.545212   74819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:31:11.558253   74819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:31:11.558331   74819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:31:11.570653   74819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:31:11.582392   74819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:31:11.582477   74819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:31:11.595098   74819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:31:11.609806   74819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:31:11.609866   74819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:31:11.620105   74819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:31:11.867731   74819 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:33:09.347430   74819 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:33:09.347646   74819 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:33:09.348892   74819 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:33:09.349019   74819 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:33:09.349289   74819 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:33:09.349491   74819 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:33:09.349667   74819 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:33:09.349988   74819 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:33:09.351845   74819 out.go:204]   - Generating certificates and keys ...
	I0717 18:33:09.351936   74819 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:33:09.352013   74819 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:33:09.352103   74819 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:33:09.352189   74819 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:33:09.352283   74819 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:33:09.352357   74819 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 18:33:09.352435   74819 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 18:33:09.352718   74819 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-019549] and IPs [192.168.39.128 127.0.0.1 ::1]
	I0717 18:33:09.352793   74819 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 18:33:09.353014   74819 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-019549] and IPs [192.168.39.128 127.0.0.1 ::1]
	I0717 18:33:09.353107   74819 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:33:09.353179   74819 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:33:09.353231   74819 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 18:33:09.353298   74819 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:33:09.353379   74819 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:33:09.353457   74819 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:33:09.353554   74819 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:33:09.353641   74819 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:33:09.353812   74819 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:33:09.353936   74819 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:33:09.353991   74819 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:33:09.354088   74819 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:33:09.355847   74819 out.go:204]   - Booting up control plane ...
	I0717 18:33:09.355944   74819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:33:09.356045   74819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:33:09.356125   74819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:33:09.356223   74819 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:33:09.356411   74819 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:33:09.356468   74819 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:33:09.356539   74819 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:33:09.356729   74819 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:33:09.356816   74819 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:33:09.357002   74819 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:33:09.357093   74819 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:33:09.357280   74819 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:33:09.357382   74819 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:33:09.357647   74819 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:33:09.357760   74819 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:33:09.357955   74819 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:33:09.357965   74819 kubeadm.go:310] 
	I0717 18:33:09.358014   74819 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:33:09.358068   74819 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:33:09.358078   74819 kubeadm.go:310] 
	I0717 18:33:09.358127   74819 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:33:09.358165   74819 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:33:09.358303   74819 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:33:09.358314   74819 kubeadm.go:310] 
	I0717 18:33:09.358445   74819 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:33:09.358505   74819 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:33:09.358569   74819 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:33:09.358590   74819 kubeadm.go:310] 
	I0717 18:33:09.358706   74819 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:33:09.358802   74819 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:33:09.358812   74819 kubeadm.go:310] 
	I0717 18:33:09.358940   74819 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:33:09.359074   74819 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:33:09.359194   74819 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:33:09.359269   74819 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:33:09.359329   74819 kubeadm.go:310] 
	W0717 18:33:09.359417   74819 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-019549] and IPs [192.168.39.128 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-019549] and IPs [192.168.39.128 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-019549] and IPs [192.168.39.128 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-019549] and IPs [192.168.39.128 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 18:33:09.359466   74819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:33:09.836103   74819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:33:09.850765   74819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:33:09.860184   74819 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:33:09.860198   74819 kubeadm.go:157] found existing configuration files:
	
	I0717 18:33:09.860234   74819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:33:09.868881   74819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:33:09.868920   74819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:33:09.877736   74819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:33:09.885826   74819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:33:09.885871   74819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:33:09.894069   74819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:33:09.901994   74819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:33:09.902032   74819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:33:09.910236   74819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:33:09.917849   74819 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:33:09.917884   74819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:33:09.926850   74819 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:33:10.119596   74819 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:35:06.295325   74819 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:35:06.295453   74819 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:35:06.297640   74819 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:35:06.297696   74819 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:35:06.297787   74819 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:35:06.297890   74819 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:35:06.297980   74819 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:35:06.298046   74819 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:35:06.299918   74819 out.go:204]   - Generating certificates and keys ...
	I0717 18:35:06.299993   74819 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:35:06.300047   74819 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:35:06.300127   74819 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:35:06.300201   74819 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:35:06.300304   74819 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:35:06.300385   74819 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:35:06.300439   74819 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:35:06.300492   74819 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:35:06.300570   74819 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:35:06.300640   74819 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:35:06.300673   74819 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:35:06.300720   74819 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:35:06.300762   74819 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:35:06.300809   74819 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:35:06.300863   74819 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:35:06.300916   74819 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:35:06.301032   74819 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:35:06.301180   74819 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:35:06.301254   74819 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:35:06.301348   74819 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:35:06.302821   74819 out.go:204]   - Booting up control plane ...
	I0717 18:35:06.302928   74819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:35:06.303015   74819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:35:06.303102   74819 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:35:06.303204   74819 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:35:06.303393   74819 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:35:06.303467   74819 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:35:06.303559   74819 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:35:06.303770   74819 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:35:06.303871   74819 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:35:06.304071   74819 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:35:06.304135   74819 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:35:06.304327   74819 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:35:06.304399   74819 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:35:06.304584   74819 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:35:06.304665   74819 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:35:06.304823   74819 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:35:06.304830   74819 kubeadm.go:310] 
	I0717 18:35:06.304863   74819 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:35:06.304896   74819 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:35:06.304902   74819 kubeadm.go:310] 
	I0717 18:35:06.304930   74819 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:35:06.304978   74819 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:35:06.305068   74819 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:35:06.305075   74819 kubeadm.go:310] 
	I0717 18:35:06.305157   74819 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:35:06.305214   74819 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:35:06.305256   74819 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:35:06.305268   74819 kubeadm.go:310] 
	I0717 18:35:06.305421   74819 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:35:06.305529   74819 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:35:06.305543   74819 kubeadm.go:310] 
	I0717 18:35:06.305631   74819 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:35:06.305701   74819 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:35:06.305784   74819 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:35:06.305859   74819 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:35:06.305894   74819 kubeadm.go:310] 
	I0717 18:35:06.305911   74819 kubeadm.go:394] duration metric: took 3m54.862527242s to StartCluster
	I0717 18:35:06.305956   74819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:35:06.306004   74819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:35:06.347369   74819 cri.go:89] found id: ""
	I0717 18:35:06.347400   74819 logs.go:276] 0 containers: []
	W0717 18:35:06.347410   74819 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:35:06.347421   74819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:35:06.347478   74819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:35:06.380733   74819 cri.go:89] found id: ""
	I0717 18:35:06.380754   74819 logs.go:276] 0 containers: []
	W0717 18:35:06.380761   74819 logs.go:278] No container was found matching "etcd"
	I0717 18:35:06.380769   74819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:35:06.380812   74819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:35:06.413506   74819 cri.go:89] found id: ""
	I0717 18:35:06.413532   74819 logs.go:276] 0 containers: []
	W0717 18:35:06.413540   74819 logs.go:278] No container was found matching "coredns"
	I0717 18:35:06.413556   74819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:35:06.413620   74819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:35:06.446206   74819 cri.go:89] found id: ""
	I0717 18:35:06.446234   74819 logs.go:276] 0 containers: []
	W0717 18:35:06.446240   74819 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:35:06.446248   74819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:35:06.446296   74819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:35:06.477644   74819 cri.go:89] found id: ""
	I0717 18:35:06.477668   74819 logs.go:276] 0 containers: []
	W0717 18:35:06.477675   74819 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:35:06.477680   74819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:35:06.477729   74819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:35:06.509588   74819 cri.go:89] found id: ""
	I0717 18:35:06.509620   74819 logs.go:276] 0 containers: []
	W0717 18:35:06.509630   74819 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:35:06.509638   74819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:35:06.509696   74819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:35:06.541410   74819 cri.go:89] found id: ""
	I0717 18:35:06.541434   74819 logs.go:276] 0 containers: []
	W0717 18:35:06.541443   74819 logs.go:278] No container was found matching "kindnet"
	I0717 18:35:06.541454   74819 logs.go:123] Gathering logs for kubelet ...
	I0717 18:35:06.541468   74819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:35:06.591073   74819 logs.go:123] Gathering logs for dmesg ...
	I0717 18:35:06.591108   74819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:35:06.604095   74819 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:35:06.604118   74819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:35:06.713393   74819 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:35:06.713426   74819 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:35:06.713482   74819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:35:06.804414   74819 logs.go:123] Gathering logs for container status ...
	I0717 18:35:06.804450   74819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 18:35:06.840985   74819 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 18:35:06.841035   74819 out.go:239] * 
	* 
	W0717 18:35:06.841094   74819 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:35:06.841125   74819 out.go:239] * 
	* 
	W0717 18:35:06.841978   74819 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:35:06.845882   74819 out.go:177] 
	W0717 18:35:06.847260   74819 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:35:06.847305   74819 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 18:35:06.847338   74819 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 18:35:06.848956   74819 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-019549 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 6 (226.520256ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:35:07.117916   80050 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-019549" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (267.48s)
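Note: the kubeadm output above shows the kubelet never answering on http://localhost:10248/healthz, and minikube's own suggestion in the log is to inspect the kubelet unit and retry with the systemd cgroup driver. A minimal manual follow-up, assuming the old-k8s-version-019549 VM is still reachable over SSH, would be:

	# inside the VM (minikube ssh -p old-k8s-version-019549), run the checks quoted in the kubeadm output:
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# then retry the failing start with the suggested cgroup-driver override
	# (remaining flags as in the original invocation above):
	out/minikube-linux-amd64 start -p old-k8s-version-019549 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd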

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-527415 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-527415 --alsologtostderr -v=3: exit status 82 (2m0.490945976s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-527415"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:32:36.577905   78634 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:32:36.578177   78634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:32:36.578189   78634 out.go:304] Setting ErrFile to fd 2...
	I0717 18:32:36.578195   78634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:32:36.578461   78634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:32:36.578706   78634 out.go:298] Setting JSON to false
	I0717 18:32:36.578780   78634 mustload.go:65] Loading cluster: embed-certs-527415
	I0717 18:32:36.579091   78634 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:32:36.579156   78634 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json ...
	I0717 18:32:36.579314   78634 mustload.go:65] Loading cluster: embed-certs-527415
	I0717 18:32:36.579425   78634 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:32:36.579448   78634 stop.go:39] StopHost: embed-certs-527415
	I0717 18:32:36.579853   78634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:32:36.579889   78634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:32:36.594873   78634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34881
	I0717 18:32:36.595346   78634 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:32:36.595921   78634 main.go:141] libmachine: Using API Version  1
	I0717 18:32:36.595944   78634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:32:36.596282   78634 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:32:36.598704   78634 out.go:177] * Stopping node "embed-certs-527415"  ...
	I0717 18:32:36.600122   78634 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 18:32:36.600154   78634 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:32:36.600376   78634 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 18:32:36.600396   78634 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:32:36.603256   78634 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:32:36.603657   78634 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:32:36.603684   78634 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:32:36.603785   78634 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:32:36.603955   78634 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:32:36.604128   78634 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:32:36.604280   78634 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:32:36.698620   78634 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 18:32:36.756115   78634 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 18:32:36.813127   78634 main.go:141] libmachine: Stopping "embed-certs-527415"...
	I0717 18:32:36.813166   78634 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:32:36.814952   78634 main.go:141] libmachine: (embed-certs-527415) Calling .Stop
	I0717 18:32:36.818501   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 0/120
	I0717 18:32:37.819965   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 1/120
	I0717 18:32:38.821982   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 2/120
	I0717 18:32:39.823745   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 3/120
	I0717 18:32:40.825301   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 4/120
	I0717 18:32:41.826928   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 5/120
	I0717 18:32:42.827964   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 6/120
	I0717 18:32:43.829319   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 7/120
	I0717 18:32:44.831172   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 8/120
	I0717 18:32:45.832581   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 9/120
	I0717 18:32:46.834774   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 10/120
	I0717 18:32:47.836072   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 11/120
	I0717 18:32:48.838303   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 12/120
	I0717 18:32:49.839798   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 13/120
	I0717 18:32:50.840974   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 14/120
	I0717 18:32:51.842944   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 15/120
	I0717 18:32:52.845234   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 16/120
	I0717 18:32:53.846725   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 17/120
	I0717 18:32:54.848581   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 18/120
	I0717 18:32:55.850037   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 19/120
	I0717 18:32:56.852131   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 20/120
	I0717 18:32:57.853784   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 21/120
	I0717 18:32:58.855665   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 22/120
	I0717 18:32:59.857229   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 23/120
	I0717 18:33:00.858804   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 24/120
	I0717 18:33:01.861101   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 25/120
	I0717 18:33:02.863514   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 26/120
	I0717 18:33:03.865032   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 27/120
	I0717 18:33:04.866526   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 28/120
	I0717 18:33:05.868068   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 29/120
	I0717 18:33:06.870313   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 30/120
	I0717 18:33:07.871699   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 31/120
	I0717 18:33:08.873003   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 32/120
	I0717 18:33:09.874534   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 33/120
	I0717 18:33:10.875758   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 34/120
	I0717 18:33:11.877793   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 35/120
	I0717 18:33:12.879271   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 36/120
	I0717 18:33:13.880832   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 37/120
	I0717 18:33:14.882126   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 38/120
	I0717 18:33:15.883519   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 39/120
	I0717 18:33:16.885724   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 40/120
	I0717 18:33:17.887177   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 41/120
	I0717 18:33:18.888803   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 42/120
	I0717 18:33:19.889870   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 43/120
	I0717 18:33:20.891564   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 44/120
	I0717 18:33:21.892974   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 45/120
	I0717 18:33:22.894408   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 46/120
	I0717 18:33:23.895810   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 47/120
	I0717 18:33:24.898143   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 48/120
	I0717 18:33:25.900350   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 49/120
	I0717 18:33:26.902497   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 50/120
	I0717 18:33:27.903838   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 51/120
	I0717 18:33:28.905359   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 52/120
	I0717 18:33:29.906817   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 53/120
	I0717 18:33:30.908123   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 54/120
	I0717 18:33:31.909628   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 55/120
	I0717 18:33:32.911819   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 56/120
	I0717 18:33:33.913180   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 57/120
	I0717 18:33:34.914887   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 58/120
	I0717 18:33:35.916375   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 59/120
	I0717 18:33:36.917690   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 60/120
	I0717 18:33:37.918872   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 61/120
	I0717 18:33:38.920729   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 62/120
	I0717 18:33:39.922041   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 63/120
	I0717 18:33:40.924143   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 64/120
	I0717 18:33:41.926378   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 65/120
	I0717 18:33:42.927771   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 66/120
	I0717 18:33:43.929321   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 67/120
	I0717 18:33:44.930552   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 68/120
	I0717 18:33:45.931959   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 69/120
	I0717 18:33:46.934293   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 70/120
	I0717 18:33:47.935477   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 71/120
	I0717 18:33:48.936748   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 72/120
	I0717 18:33:49.938211   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 73/120
	I0717 18:33:50.939688   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 74/120
	I0717 18:33:51.941606   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 75/120
	I0717 18:33:52.943243   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 76/120
	I0717 18:33:53.944585   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 77/120
	I0717 18:33:54.946104   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 78/120
	I0717 18:33:55.947390   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 79/120
	I0717 18:33:56.949696   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 80/120
	I0717 18:33:57.951367   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 81/120
	I0717 18:33:58.952833   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 82/120
	I0717 18:33:59.954070   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 83/120
	I0717 18:34:00.955579   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 84/120
	I0717 18:34:01.957390   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 85/120
	I0717 18:34:02.958691   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 86/120
	I0717 18:34:03.959960   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 87/120
	I0717 18:34:04.961587   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 88/120
	I0717 18:34:05.963508   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 89/120
	I0717 18:34:06.965652   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 90/120
	I0717 18:34:07.967510   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 91/120
	I0717 18:34:08.968761   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 92/120
	I0717 18:34:09.970196   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 93/120
	I0717 18:34:10.971922   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 94/120
	I0717 18:34:11.973816   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 95/120
	I0717 18:34:12.975617   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 96/120
	I0717 18:34:13.976907   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 97/120
	I0717 18:34:14.978564   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 98/120
	I0717 18:34:15.980636   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 99/120
	I0717 18:34:16.983017   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 100/120
	I0717 18:34:17.984642   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 101/120
	I0717 18:34:18.986087   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 102/120
	I0717 18:34:19.987729   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 103/120
	I0717 18:34:20.989234   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 104/120
	I0717 18:34:21.991221   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 105/120
	I0717 18:34:22.992738   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 106/120
	I0717 18:34:23.994210   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 107/120
	I0717 18:34:24.995683   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 108/120
	I0717 18:34:25.997019   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 109/120
	I0717 18:34:26.999073   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 110/120
	I0717 18:34:28.000534   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 111/120
	I0717 18:34:29.002226   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 112/120
	I0717 18:34:30.003598   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 113/120
	I0717 18:34:31.004981   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 114/120
	I0717 18:34:32.006945   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 115/120
	I0717 18:34:33.008307   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 116/120
	I0717 18:34:34.009932   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 117/120
	I0717 18:34:35.011581   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 118/120
	I0717 18:34:36.013240   78634 main.go:141] libmachine: (embed-certs-527415) Waiting for machine to stop 119/120
	I0717 18:34:37.014566   78634 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 18:34:37.014618   78634 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 18:34:37.016611   78634 out.go:177] 
	W0717 18:34:37.017785   78634 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 18:34:37.017803   78634 out.go:239] * 
	* 
	W0717 18:34:37.020540   78634 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:34:37.022682   78634 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-527415 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-527415 -n embed-certs-527415
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-527415 -n embed-certs-527415: exit status 3 (18.5608378s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:34:55.585277   79741 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.90:22: connect: no route to host
	E0717 18:34:55.585297   79741 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.90:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-527415" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.05s)
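Note: the stop failed with GUEST_STOP_TIMEOUT after 120 polls of the KVM machine, and the post-mortem status check could no longer reach 192.168.61.90:22. A sketch of manual triage on the Jenkins host, assuming shell access there, is to collect the files the error box asks for and query libvirt directly (the log above shows the kvm2 domain is named embed-certs-527415 and the suite uses qemu:///system; the virsh calls are outside the normal test flow):

	out/minikube-linux-amd64 logs -p embed-certs-527415 --file=logs.txt
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log
	virsh -c qemu:///system list --all                       # state libvirt reports for the domain
	virsh -c qemu:///system destroy embed-certs-527415       # force power-off if a graceful stop never completes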

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-066175 --alsologtostderr -v=3
E0717 18:33:04.882905   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:33:10.003113   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:33:20.244188   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:33:21.395450   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 18:33:31.728824   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:33:40.725293   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:33:51.504384   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
E0717 18:34:21.686164   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:34:26.523346   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:34:26.528629   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:34:26.538867   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:34:26.559189   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:34:26.599443   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:34:26.679780   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:34:26.840180   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:34:27.160411   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:34:27.801404   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:34:29.082554   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-066175 --alsologtostderr -v=3: exit status 82 (2m0.517931878s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-066175"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:33:02.801900   79293 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:33:02.802014   79293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:33:02.802024   79293 out.go:304] Setting ErrFile to fd 2...
	I0717 18:33:02.802030   79293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:33:02.802283   79293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:33:02.802566   79293 out.go:298] Setting JSON to false
	I0717 18:33:02.802660   79293 mustload.go:65] Loading cluster: no-preload-066175
	I0717 18:33:02.803124   79293 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:33:02.803220   79293 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/config.json ...
	I0717 18:33:02.803419   79293 mustload.go:65] Loading cluster: no-preload-066175
	I0717 18:33:02.803573   79293 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:33:02.803603   79293 stop.go:39] StopHost: no-preload-066175
	I0717 18:33:02.804155   79293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:33:02.804207   79293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:33:02.819137   79293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41217
	I0717 18:33:02.819581   79293 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:33:02.820285   79293 main.go:141] libmachine: Using API Version  1
	I0717 18:33:02.820311   79293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:33:02.820651   79293 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:33:02.822826   79293 out.go:177] * Stopping node "no-preload-066175"  ...
	I0717 18:33:02.823907   79293 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 18:33:02.823938   79293 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:33:02.824130   79293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 18:33:02.824150   79293 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:33:02.826937   79293 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:33:02.827316   79293 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:31:18 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:33:02.827345   79293 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:33:02.827479   79293 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:33:02.827645   79293 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:33:02.827801   79293 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:33:02.827952   79293 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:33:02.935831   79293 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 18:33:02.997397   79293 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 18:33:03.061746   79293 main.go:141] libmachine: Stopping "no-preload-066175"...
	I0717 18:33:03.061796   79293 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:33:03.063534   79293 main.go:141] libmachine: (no-preload-066175) Calling .Stop
	I0717 18:33:03.067728   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 0/120
	I0717 18:33:04.069330   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 1/120
	I0717 18:33:05.070949   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 2/120
	I0717 18:33:06.072568   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 3/120
	I0717 18:33:07.074005   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 4/120
	I0717 18:33:08.076187   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 5/120
	I0717 18:33:09.077626   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 6/120
	I0717 18:33:10.079227   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 7/120
	I0717 18:33:11.081060   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 8/120
	I0717 18:33:12.082478   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 9/120
	I0717 18:33:13.084814   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 10/120
	I0717 18:33:14.085961   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 11/120
	I0717 18:33:15.087247   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 12/120
	I0717 18:33:16.088456   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 13/120
	I0717 18:33:17.089833   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 14/120
	I0717 18:33:18.091819   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 15/120
	I0717 18:33:19.093149   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 16/120
	I0717 18:33:20.095286   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 17/120
	I0717 18:33:21.096666   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 18/120
	I0717 18:33:22.098686   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 19/120
	I0717 18:33:23.099985   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 20/120
	I0717 18:33:24.101491   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 21/120
	I0717 18:33:25.103317   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 22/120
	I0717 18:33:26.105012   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 23/120
	I0717 18:33:27.106324   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 24/120
	I0717 18:33:28.108438   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 25/120
	I0717 18:33:29.109679   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 26/120
	I0717 18:33:30.111625   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 27/120
	I0717 18:33:31.112968   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 28/120
	I0717 18:33:32.114544   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 29/120
	I0717 18:33:33.117033   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 30/120
	I0717 18:33:34.118149   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 31/120
	I0717 18:33:35.119643   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 32/120
	I0717 18:33:36.121068   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 33/120
	I0717 18:33:37.122392   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 34/120
	I0717 18:33:38.124473   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 35/120
	I0717 18:33:39.125980   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 36/120
	I0717 18:33:40.127319   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 37/120
	I0717 18:33:41.128647   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 38/120
	I0717 18:33:42.130060   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 39/120
	I0717 18:33:43.132136   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 40/120
	I0717 18:33:44.133852   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 41/120
	I0717 18:33:45.135971   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 42/120
	I0717 18:33:46.137492   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 43/120
	I0717 18:33:47.138959   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 44/120
	I0717 18:33:48.140433   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 45/120
	I0717 18:33:49.141889   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 46/120
	I0717 18:33:50.143243   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 47/120
	I0717 18:33:51.144485   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 48/120
	I0717 18:33:52.145898   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 49/120
	I0717 18:33:53.147535   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 50/120
	I0717 18:33:54.148767   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 51/120
	I0717 18:33:55.150057   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 52/120
	I0717 18:33:56.151381   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 53/120
	I0717 18:33:57.152701   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 54/120
	I0717 18:33:58.154448   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 55/120
	I0717 18:33:59.155837   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 56/120
	I0717 18:34:00.157156   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 57/120
	I0717 18:34:01.158639   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 58/120
	I0717 18:34:02.160138   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 59/120
	I0717 18:34:03.161926   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 60/120
	I0717 18:34:04.163292   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 61/120
	I0717 18:34:05.165190   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 62/120
	I0717 18:34:06.167276   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 63/120
	I0717 18:34:07.168718   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 64/120
	I0717 18:34:08.170371   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 65/120
	I0717 18:34:09.171669   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 66/120
	I0717 18:34:10.173201   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 67/120
	I0717 18:34:11.175780   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 68/120
	I0717 18:34:12.177272   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 69/120
	I0717 18:34:13.179536   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 70/120
	I0717 18:34:14.181674   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 71/120
	I0717 18:34:15.183114   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 72/120
	I0717 18:34:16.184654   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 73/120
	I0717 18:34:17.186160   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 74/120
	I0717 18:34:18.188338   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 75/120
	I0717 18:34:19.189804   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 76/120
	I0717 18:34:20.191316   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 77/120
	I0717 18:34:21.193133   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 78/120
	I0717 18:34:22.194687   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 79/120
	I0717 18:34:23.197047   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 80/120
	I0717 18:34:24.198488   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 81/120
	I0717 18:34:25.200202   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 82/120
	I0717 18:34:26.201649   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 83/120
	I0717 18:34:27.203603   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 84/120
	I0717 18:34:28.205641   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 85/120
	I0717 18:34:29.207604   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 86/120
	I0717 18:34:30.208964   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 87/120
	I0717 18:34:31.210406   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 88/120
	I0717 18:34:32.211923   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 89/120
	I0717 18:34:33.213948   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 90/120
	I0717 18:34:34.215653   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 91/120
	I0717 18:34:35.216918   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 92/120
	I0717 18:34:36.218229   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 93/120
	I0717 18:34:37.219509   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 94/120
	I0717 18:34:38.221524   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 95/120
	I0717 18:34:39.223353   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 96/120
	I0717 18:34:40.224762   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 97/120
	I0717 18:34:41.226020   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 98/120
	I0717 18:34:42.228572   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 99/120
	I0717 18:34:43.230947   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 100/120
	I0717 18:34:44.232321   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 101/120
	I0717 18:34:45.233750   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 102/120
	I0717 18:34:46.235320   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 103/120
	I0717 18:34:47.236652   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 104/120
	I0717 18:34:48.238681   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 105/120
	I0717 18:34:49.240160   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 106/120
	I0717 18:34:50.241641   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 107/120
	I0717 18:34:51.243122   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 108/120
	I0717 18:34:52.244533   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 109/120
	I0717 18:34:53.246887   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 110/120
	I0717 18:34:54.248398   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 111/120
	I0717 18:34:55.250005   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 112/120
	I0717 18:34:56.251586   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 113/120
	I0717 18:34:57.253179   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 114/120
	I0717 18:34:58.255516   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 115/120
	I0717 18:34:59.257111   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 116/120
	I0717 18:35:00.258764   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 117/120
	I0717 18:35:01.260213   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 118/120
	I0717 18:35:02.261790   79293 main.go:141] libmachine: (no-preload-066175) Waiting for machine to stop 119/120
	I0717 18:35:03.262752   79293 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 18:35:03.262806   79293 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 18:35:03.264857   79293 out.go:177] 
	W0717 18:35:03.266360   79293 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 18:35:03.266384   79293 out.go:239] * 
	* 
	W0717 18:35:03.269291   79293 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:35:03.270757   79293 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-066175 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-066175 -n no-preload-066175
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-066175 -n no-preload-066175: exit status 3 (18.42464561s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:35:21.697318   79970 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.216:22: connect: no route to host
	E0717 18:35:21.697338   79970 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.216:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-066175" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.94s)
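The stop above failed with GUEST_STOP_TIMEOUT: the kvm2 driver polled the domain 120 times (roughly two minutes) and it never left the "Running" state. One way to follow up by hand on the libvirt side (domain name taken from the log above; these virsh calls are a generic libvirt workaround, not a step the test performs):

	virsh -c qemu:///system list --all                    # is no-preload-066175 still listed as running?
	virsh -c qemu:///system shutdown no-preload-066175    # ask the guest for an ACPI shutdown
	virsh -c qemu:///system destroy no-preload-066175     # hard power-off if the shutdown request hangs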

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-022930 --alsologtostderr -v=3
E0717 18:34:47.005042   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:34:53.649776   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-022930 --alsologtostderr -v=3: exit status 82 (2m0.499343949s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-022930"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:34:43.006271   79837 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:34:43.006390   79837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:34:43.006398   79837 out.go:304] Setting ErrFile to fd 2...
	I0717 18:34:43.006402   79837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:34:43.006555   79837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:34:43.006799   79837 out.go:298] Setting JSON to false
	I0717 18:34:43.006874   79837 mustload.go:65] Loading cluster: default-k8s-diff-port-022930
	I0717 18:34:43.007186   79837 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:34:43.007266   79837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:34:43.007480   79837 mustload.go:65] Loading cluster: default-k8s-diff-port-022930
	I0717 18:34:43.007628   79837 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:34:43.007673   79837 stop.go:39] StopHost: default-k8s-diff-port-022930
	I0717 18:34:43.008067   79837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:34:43.008114   79837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:34:43.023355   79837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38511
	I0717 18:34:43.023850   79837 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:34:43.024499   79837 main.go:141] libmachine: Using API Version  1
	I0717 18:34:43.024524   79837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:34:43.024922   79837 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:34:43.026905   79837 out.go:177] * Stopping node "default-k8s-diff-port-022930"  ...
	I0717 18:34:43.027957   79837 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0717 18:34:43.027984   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:34:43.028193   79837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0717 18:34:43.028219   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:34:43.030959   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:34:43.031415   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:33:09 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:34:43.031444   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:34:43.031639   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:34:43.031832   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:34:43.031985   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:34:43.032133   79837 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:34:43.143102   79837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0717 18:34:43.207581   79837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0717 18:34:43.255753   79837 main.go:141] libmachine: Stopping "default-k8s-diff-port-022930"...
	I0717 18:34:43.255800   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:34:43.257353   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Stop
	I0717 18:34:43.260820   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 0/120
	I0717 18:34:44.262133   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 1/120
	I0717 18:34:45.263367   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 2/120
	I0717 18:34:46.264678   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 3/120
	I0717 18:34:47.265942   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 4/120
	I0717 18:34:48.267871   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 5/120
	I0717 18:34:49.269167   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 6/120
	I0717 18:34:50.271351   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 7/120
	I0717 18:34:51.272551   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 8/120
	I0717 18:34:52.273838   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 9/120
	I0717 18:34:53.275125   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 10/120
	I0717 18:34:54.276644   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 11/120
	I0717 18:34:55.277952   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 12/120
	I0717 18:34:56.279276   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 13/120
	I0717 18:34:57.280743   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 14/120
	I0717 18:34:58.282798   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 15/120
	I0717 18:34:59.284062   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 16/120
	I0717 18:35:00.285407   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 17/120
	I0717 18:35:01.287146   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 18/120
	I0717 18:35:02.288481   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 19/120
	I0717 18:35:03.290672   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 20/120
	I0717 18:35:04.292398   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 21/120
	I0717 18:35:05.294619   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 22/120
	I0717 18:35:06.296227   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 23/120
	I0717 18:35:07.297922   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 24/120
	I0717 18:35:08.299992   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 25/120
	I0717 18:35:09.301613   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 26/120
	I0717 18:35:10.303867   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 27/120
	I0717 18:35:11.305454   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 28/120
	I0717 18:35:12.307575   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 29/120
	I0717 18:35:13.309853   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 30/120
	I0717 18:35:14.311283   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 31/120
	I0717 18:35:15.312730   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 32/120
	I0717 18:35:16.314409   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 33/120
	I0717 18:35:17.315884   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 34/120
	I0717 18:35:18.318451   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 35/120
	I0717 18:35:19.319909   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 36/120
	I0717 18:35:20.321586   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 37/120
	I0717 18:35:21.322987   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 38/120
	I0717 18:35:22.324421   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 39/120
	I0717 18:35:23.326818   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 40/120
	I0717 18:35:24.328128   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 41/120
	I0717 18:35:25.329368   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 42/120
	I0717 18:35:26.330875   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 43/120
	I0717 18:35:27.332526   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 44/120
	I0717 18:35:28.334565   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 45/120
	I0717 18:35:29.336231   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 46/120
	I0717 18:35:30.337729   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 47/120
	I0717 18:35:31.339347   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 48/120
	I0717 18:35:32.340643   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 49/120
	I0717 18:35:33.342984   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 50/120
	I0717 18:35:34.344302   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 51/120
	I0717 18:35:35.346562   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 52/120
	I0717 18:35:36.348220   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 53/120
	I0717 18:35:37.349676   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 54/120
	I0717 18:35:38.351286   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 55/120
	I0717 18:35:39.353041   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 56/120
	I0717 18:35:40.354484   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 57/120
	I0717 18:35:41.356355   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 58/120
	I0717 18:35:42.357838   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 59/120
	I0717 18:35:43.359309   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 60/120
	I0717 18:35:44.360671   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 61/120
	I0717 18:35:45.362352   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 62/120
	I0717 18:35:46.363781   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 63/120
	I0717 18:35:47.365223   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 64/120
	I0717 18:35:48.367262   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 65/120
	I0717 18:35:49.368774   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 66/120
	I0717 18:35:50.370080   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 67/120
	I0717 18:35:51.371820   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 68/120
	I0717 18:35:52.373399   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 69/120
	I0717 18:35:53.375772   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 70/120
	I0717 18:35:54.377348   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 71/120
	I0717 18:35:55.378728   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 72/120
	I0717 18:35:56.380388   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 73/120
	I0717 18:35:57.381764   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 74/120
	I0717 18:35:58.383740   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 75/120
	I0717 18:35:59.385654   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 76/120
	I0717 18:36:00.386908   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 77/120
	I0717 18:36:01.388371   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 78/120
	I0717 18:36:02.389670   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 79/120
	I0717 18:36:03.392060   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 80/120
	I0717 18:36:04.393602   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 81/120
	I0717 18:36:05.395128   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 82/120
	I0717 18:36:06.396770   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 83/120
	I0717 18:36:07.398471   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 84/120
	I0717 18:36:08.400800   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 85/120
	I0717 18:36:09.402254   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 86/120
	I0717 18:36:10.403879   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 87/120
	I0717 18:36:11.405312   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 88/120
	I0717 18:36:12.407043   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 89/120
	I0717 18:36:13.409513   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 90/120
	I0717 18:36:14.410946   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 91/120
	I0717 18:36:15.412337   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 92/120
	I0717 18:36:16.413779   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 93/120
	I0717 18:36:17.415183   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 94/120
	I0717 18:36:18.417154   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 95/120
	I0717 18:36:19.418656   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 96/120
	I0717 18:36:20.419903   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 97/120
	I0717 18:36:21.421477   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 98/120
	I0717 18:36:22.423218   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 99/120
	I0717 18:36:23.425541   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 100/120
	I0717 18:36:24.427008   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 101/120
	I0717 18:36:25.428411   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 102/120
	I0717 18:36:26.429857   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 103/120
	I0717 18:36:27.431328   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 104/120
	I0717 18:36:28.433440   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 105/120
	I0717 18:36:29.434783   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 106/120
	I0717 18:36:30.436030   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 107/120
	I0717 18:36:31.437484   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 108/120
	I0717 18:36:32.439305   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 109/120
	I0717 18:36:33.441507   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 110/120
	I0717 18:36:34.442955   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 111/120
	I0717 18:36:35.444266   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 112/120
	I0717 18:36:36.445777   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 113/120
	I0717 18:36:37.447040   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 114/120
	I0717 18:36:38.449091   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 115/120
	I0717 18:36:39.450548   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 116/120
	I0717 18:36:40.451960   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 117/120
	I0717 18:36:41.453255   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 118/120
	I0717 18:36:42.454265   79837 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for machine to stop 119/120
	I0717 18:36:43.455709   79837 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0717 18:36:43.455757   79837 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0717 18:36:43.457830   79837 out.go:177] 
	W0717 18:36:43.459209   79837 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0717 18:36:43.459232   79837 out.go:239] * 
	* 
	W0717 18:36:43.462122   79837 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:36:43.464130   79837 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-022930 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930: exit status 3 (18.58326826s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:37:02.049266   80760 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host
	E0717 18:37:02.049289   80760 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022930" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.08s)
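The default-k8s-diff-port profile hit the same GUEST_STOP_TIMEOUT: 120 one-second polls and the guest never reported anything other than "Running". When a libvirt guest ignores the shutdown request like this, a closer look can be taken from the host (domain name from the log; the console step assumes the guest image exposes a serial console):

	virsh -c qemu:///system domstate default-k8s-diff-port-022930   # libvirt's view of the guest state
	virsh -c qemu:///system console default-k8s-diff-port-022930    # attach to the serial console; detach with Ctrl+]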

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-527415 -n embed-certs-527415
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-527415 -n embed-certs-527415: exit status 3 (3.167521379s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:34:58.753279   79904 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.90:22: connect: no route to host
	E0717 18:34:58.753306   79904 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.90:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-527415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-527415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153482171s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.90:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-527415 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-527415 -n embed-certs-527415
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-527415 -n embed-certs-527415: exit status 3 (3.062224985s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:35:07.969394   80001 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.90:22: connect: no route to host
	E0717 18:35:07.969432   80001 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.90:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-527415" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
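The addon enable failed before it could do any work: the guest at 192.168.61.90 is unreachable over SSH ("no route to host"), so minikube cannot even check whether the cluster is paused. A rough set of reachability checks, with the IP and port taken from the errors above:

	ping -c 3 192.168.61.90                                                   # is the guest IP reachable at all?
	nc -zv -w 5 192.168.61.90 22                                              # does the SSH port minikube uses answer?
	out/minikube-linux-amd64 status -p embed-certs-527415 --alsologtostderr   # status again, with driver-level logging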

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-019549 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-019549 create -f testdata/busybox.yaml: exit status 1 (43.175299ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-019549" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-019549 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 6 (207.051748ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:35:07.369664   80088 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-019549" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549
E0717 18:35:07.485778   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 6 (212.147142ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:35:07.581645   80120 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-019549" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)
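Both kubectl calls failed because the kubeconfig no longer contains an entry for this profile ("old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig), even though the status output reports the host as Running. The warning in that output already names the likely fix; a short sequence to verify and repair the context would look roughly like:

	kubectl config get-contexts                                           # is old-k8s-version-019549 listed at all?
	out/minikube-linux-amd64 update-context -p old-k8s-version-019549    # rewrite the kubeconfig entry, as the warning suggests
	kubectl --context old-k8s-version-019549 get nodes                   # re-check once the context exists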

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-019549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-019549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m34.705963188s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-019549 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-019549 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-019549 describe deploy/metrics-server -n kube-system: exit status 1 (42.615212ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-019549" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-019549 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 6 (216.991473ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:36:42.546093   80712 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-019549" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.97s)
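Here the failure is inside the guest: kubectl apply against localhost:8443 was refused, meaning the v1.20.0 apiserver was not serving when the addon callbacks ran. Assuming the guest is reachable over SSH, a rough way to inspect the control plane from the VM itself:

	out/minikube-linux-amd64 ssh -p old-k8s-version-019549 -- sudo crictl ps -a                              # is kube-apiserver running or crash-looping?
	out/minikube-linux-amd64 ssh -p old-k8s-version-019549 -- sudo journalctl -u kubelet --no-pager -n 50    # recent kubelet log lines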

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-066175 -n no-preload-066175
E0717 18:35:21.839216   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:35:22.329703   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:24.399394   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-066175 -n no-preload-066175: exit status 3 (3.167721059s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:35:24.865359   80290 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.216:22: connect: no route to host
	E0717 18:35:24.865379   80290 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.216:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-066175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0717 18:35:29.519635   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-066175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152770921s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.216:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-066175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-066175 -n no-preload-066175
E0717 18:35:32.570783   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-066175 -n no-preload-066175: exit status 3 (3.062877654s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:35:34.081392   80371 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.216:22: connect: no route to host
	E0717 18:35:34.081420   80371 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.216:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-066175" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
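As with embed-certs, the profile is wedged between two views: libvirt still reports the domain as Running, while 192.168.72.216:22 returns "no route to host", so every status and addon call fails. A quick cross-check of both sides (domain name and IP taken from the log; domifaddr relies on libvirt's DHCP lease data):

	virsh -c qemu:///system domifaddr no-preload-066175   # does libvirt still associate 192.168.72.216 with the guest?
	nc -zv -w 5 192.168.72.216 22                         # can the host reach the guest's SSH port?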

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (709.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-019549 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-019549 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m45.675563089s)

                                                
                                                
-- stdout --
	* [old-k8s-version-019549] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-019549" primary control-plane node in "old-k8s-version-019549" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-019549" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:36:45.049189   80857 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:36:45.049305   80857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:36:45.049314   80857 out.go:304] Setting ErrFile to fd 2...
	I0717 18:36:45.049318   80857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:36:45.049493   80857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:36:45.050001   80857 out.go:298] Setting JSON to false
	I0717 18:36:45.050910   80857 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8348,"bootTime":1721233057,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:36:45.050960   80857 start.go:139] virtualization: kvm guest
	I0717 18:36:45.052962   80857 out.go:177] * [old-k8s-version-019549] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:36:45.054544   80857 notify.go:220] Checking for updates...
	I0717 18:36:45.054609   80857 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:36:45.055981   80857 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:36:45.057926   80857 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:36:45.059476   80857 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:36:45.060930   80857 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:36:45.062227   80857 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:36:45.063847   80857 config.go:182] Loaded profile config "old-k8s-version-019549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:36:45.064255   80857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:36:45.064288   80857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:36:45.078547   80857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36377
	I0717 18:36:45.078888   80857 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:36:45.079397   80857 main.go:141] libmachine: Using API Version  1
	I0717 18:36:45.079416   80857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:36:45.079721   80857 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:36:45.079900   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:36:45.081575   80857 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 18:36:45.082897   80857 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:36:45.083174   80857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:36:45.083204   80857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:36:45.097235   80857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35611
	I0717 18:36:45.097612   80857 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:36:45.098025   80857 main.go:141] libmachine: Using API Version  1
	I0717 18:36:45.098044   80857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:36:45.098364   80857 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:36:45.098557   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:36:45.132379   80857 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:36:45.133910   80857 start.go:297] selected driver: kvm2
	I0717 18:36:45.133932   80857 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:36:45.134029   80857 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:36:45.134706   80857 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:36:45.134780   80857 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:36:45.149260   80857 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:36:45.149614   80857 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:36:45.149668   80857 cni.go:84] Creating CNI manager for ""
	I0717 18:36:45.149681   80857 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:36:45.149723   80857 start.go:340] cluster config:
	{Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:36:45.149812   80857 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:36:45.152154   80857 out.go:177] * Starting "old-k8s-version-019549" primary control-plane node in "old-k8s-version-019549" cluster
	I0717 18:36:45.153535   80857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:36:45.153581   80857 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 18:36:45.153588   80857 cache.go:56] Caching tarball of preloaded images
	I0717 18:36:45.153648   80857 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:36:45.153658   80857 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0717 18:36:45.153745   80857 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/config.json ...
	I0717 18:36:45.153914   80857 start.go:360] acquireMachinesLock for old-k8s-version-019549: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:40:02.249487   80857 start.go:364] duration metric: took 3m17.095542929s to acquireMachinesLock for "old-k8s-version-019549"
	I0717 18:40:02.249548   80857 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:02.249556   80857 fix.go:54] fixHost starting: 
	I0717 18:40:02.249946   80857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:02.249976   80857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:02.269365   80857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0717 18:40:02.269715   80857 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:02.270182   80857 main.go:141] libmachine: Using API Version  1
	I0717 18:40:02.270205   80857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:02.270534   80857 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:02.270738   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:02.270875   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetState
	I0717 18:40:02.272408   80857 fix.go:112] recreateIfNeeded on old-k8s-version-019549: state=Stopped err=<nil>
	I0717 18:40:02.272443   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	W0717 18:40:02.272597   80857 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:02.274702   80857 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-019549" ...
	I0717 18:40:02.275985   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .Start
	I0717 18:40:02.276143   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring networks are active...
	I0717 18:40:02.276898   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network default is active
	I0717 18:40:02.277333   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network mk-old-k8s-version-019549 is active
	I0717 18:40:02.277796   80857 main.go:141] libmachine: (old-k8s-version-019549) Getting domain xml...
	I0717 18:40:02.278481   80857 main.go:141] libmachine: (old-k8s-version-019549) Creating domain...
	I0717 18:40:03.571325   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting to get IP...
	I0717 18:40:03.572359   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.572836   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.572968   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.572816   81751 retry.go:31] will retry after 301.991284ms: waiting for machine to come up
	I0717 18:40:03.876263   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.876688   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.876715   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.876637   81751 retry.go:31] will retry after 286.461163ms: waiting for machine to come up
	I0717 18:40:04.165366   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.165873   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.165902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.165811   81751 retry.go:31] will retry after 383.479108ms: waiting for machine to come up
	I0717 18:40:04.551152   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.551615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.551650   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.551589   81751 retry.go:31] will retry after 429.076714ms: waiting for machine to come up
	I0717 18:40:04.982157   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.982517   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.982545   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.982470   81751 retry.go:31] will retry after 553.684035ms: waiting for machine to come up
	I0717 18:40:05.538229   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:05.538753   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:05.538777   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:05.538702   81751 retry.go:31] will retry after 747.130907ms: waiting for machine to come up
	I0717 18:40:06.287146   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:06.287626   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:06.287665   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:06.287581   81751 retry.go:31] will retry after 1.171580264s: waiting for machine to come up
	I0717 18:40:07.461393   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:07.462015   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:07.462046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:07.461963   81751 retry.go:31] will retry after 1.199265198s: waiting for machine to come up
	I0717 18:40:08.663340   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:08.663789   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:08.663815   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:08.663745   81751 retry.go:31] will retry after 1.621895351s: waiting for machine to come up
	I0717 18:40:10.287596   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:10.288019   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:10.288046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:10.287964   81751 retry.go:31] will retry after 1.748504204s: waiting for machine to come up
	I0717 18:40:12.038137   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:12.038582   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:12.038615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:12.038532   81751 retry.go:31] will retry after 2.477996004s: waiting for machine to come up
	I0717 18:40:14.517788   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:14.518175   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:14.518203   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:14.518123   81751 retry.go:31] will retry after 3.29313184s: waiting for machine to come up
	I0717 18:40:17.813673   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814213   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has current primary IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814242   80857 main.go:141] libmachine: (old-k8s-version-019549) Found IP for machine: 192.168.39.128
	I0717 18:40:17.814277   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserving static IP address...
	I0717 18:40:17.814720   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserved static IP address: 192.168.39.128
	I0717 18:40:17.814738   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting for SSH to be available...
	I0717 18:40:17.814762   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.814783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | skip adding static IP to network mk-old-k8s-version-019549 - found existing host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"}
	I0717 18:40:17.814796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Getting to WaitForSSH function...
	I0717 18:40:17.817314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817714   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.817743   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH client type: external
	I0717 18:40:17.817944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa (-rw-------)
	I0717 18:40:17.817971   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:17.817984   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | About to run SSH command:
	I0717 18:40:17.818000   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | exit 0
	I0717 18:40:17.945902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:17.946262   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetConfigRaw
	I0717 18:40:17.946907   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:17.949757   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950158   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.950178   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950474   80857 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/config.json ...
	I0717 18:40:17.950706   80857 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:17.950728   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:17.950941   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:17.953738   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954141   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.954184   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954282   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:17.954456   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954617   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954790   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:17.954957   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:17.955121   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:17.955131   80857 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:18.061082   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:18.061113   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061405   80857 buildroot.go:166] provisioning hostname "old-k8s-version-019549"
	I0717 18:40:18.061432   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061685   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.064855   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.065348   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065537   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.065777   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.065929   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.066118   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.066329   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.066547   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.066564   80857 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-019549 && echo "old-k8s-version-019549" | sudo tee /etc/hostname
	I0717 18:40:18.191467   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-019549
	
	I0717 18:40:18.191517   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.194917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195455   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.195502   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195714   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.195908   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196105   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196288   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.196483   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.196708   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.196731   80857 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-019549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-019549/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-019549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:18.315020   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:18.315047   80857 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:18.315065   80857 buildroot.go:174] setting up certificates
	I0717 18:40:18.315078   80857 provision.go:84] configureAuth start
	I0717 18:40:18.315090   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.315358   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:18.318342   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.318796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.318826   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.319078   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.321562   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.321914   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.321944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.322125   80857 provision.go:143] copyHostCerts
	I0717 18:40:18.322208   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:18.322226   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:18.322309   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:18.322443   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:18.322457   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:18.322492   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:18.322579   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:18.322591   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:18.322621   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:18.322727   80857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-019549 san=[127.0.0.1 192.168.39.128 localhost minikube old-k8s-version-019549]
	I0717 18:40:18.397216   80857 provision.go:177] copyRemoteCerts
	I0717 18:40:18.397266   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:18.397301   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.399887   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400237   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.400286   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400531   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.400732   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.400880   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.401017   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.490677   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:18.518392   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 18:40:18.543930   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:18.567339   80857 provision.go:87] duration metric: took 252.250106ms to configureAuth
	I0717 18:40:18.567360   80857 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:18.567539   80857 config.go:182] Loaded profile config "old-k8s-version-019549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:40:18.567610   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.570373   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.570809   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570943   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.571140   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571281   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.571624   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.571841   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.571862   80857 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:18.845725   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:18.845752   80857 machine.go:97] duration metric: took 895.03234ms to provisionDockerMachine
	I0717 18:40:18.845765   80857 start.go:293] postStartSetup for "old-k8s-version-019549" (driver="kvm2")
	I0717 18:40:18.845778   80857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:18.845828   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:18.846158   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:18.846192   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.848760   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849264   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.849293   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.849649   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.849843   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.850007   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.938026   80857 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:18.943223   80857 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:18.943254   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:18.943317   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:18.943417   80857 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:18.943509   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:18.954887   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:18.976980   80857 start.go:296] duration metric: took 131.200877ms for postStartSetup
	I0717 18:40:18.977022   80857 fix.go:56] duration metric: took 16.727466541s for fixHost
	I0717 18:40:18.977041   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.980020   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980384   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.980417   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980533   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.980723   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.980903   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.981059   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.981207   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.981406   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.981418   80857 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:40:19.093409   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241619.063415252
	
	I0717 18:40:19.093433   80857 fix.go:216] guest clock: 1721241619.063415252
	I0717 18:40:19.093443   80857 fix.go:229] Guest: 2024-07-17 18:40:19.063415252 +0000 UTC Remote: 2024-07-17 18:40:18.97702579 +0000 UTC m=+213.960604949 (delta=86.389462ms)
	I0717 18:40:19.093494   80857 fix.go:200] guest clock delta is within tolerance: 86.389462ms
	I0717 18:40:19.093506   80857 start.go:83] releasing machines lock for "old-k8s-version-019549", held for 16.843984035s
	I0717 18:40:19.093543   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.093842   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:19.096443   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.096817   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.096848   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.097035   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097579   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097769   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097859   80857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:19.097915   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.098007   80857 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:19.098031   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.100775   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101108   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101160   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101185   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101412   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101595   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.101606   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101637   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101718   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.101789   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101853   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.101975   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.102092   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.102212   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.218596   80857 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:19.225675   80857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:19.371453   80857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:19.381365   80857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:19.381438   80857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:19.397504   80857 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:19.397530   80857 start.go:495] detecting cgroup driver to use...
	I0717 18:40:19.397597   80857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:19.412150   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:19.425495   80857 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:19.425578   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:19.438662   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:19.451953   80857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:19.578702   80857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:19.733328   80857 docker.go:233] disabling docker service ...
	I0717 18:40:19.733411   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:19.753615   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:19.774057   80857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:19.933901   80857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:20.049914   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:20.063500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:20.082560   80857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 18:40:20.082611   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.092857   80857 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:20.092912   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.103283   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.112612   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.122671   80857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:20.132892   80857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:20.145445   80857 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:20.145501   80857 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:20.158958   80857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:20.168377   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:20.307224   80857 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:20.453407   80857 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:20.453490   80857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:20.458007   80857 start.go:563] Will wait 60s for crictl version
	I0717 18:40:20.458062   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:20.461420   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:20.507358   80857 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:20.507426   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.542812   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.577280   80857 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 18:40:20.578679   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:20.581569   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.581933   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:20.581961   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.582197   80857 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:20.586047   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:20.598137   80857 kubeadm.go:883] updating cluster {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:20.598284   80857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:40:20.598355   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:20.646681   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:20.646757   80857 ssh_runner.go:195] Run: which lz4
	I0717 18:40:20.650691   80857 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 18:40:20.654703   80857 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:20.654730   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 18:40:22.163706   80857 crio.go:462] duration metric: took 1.513040695s to copy over tarball
	I0717 18:40:22.163783   80857 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:40:25.307875   80857 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.144060111s)
	I0717 18:40:25.307903   80857 crio.go:469] duration metric: took 3.144169984s to extract the tarball
	I0717 18:40:25.307914   80857 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:25.354436   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:25.404799   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:25.404827   80857 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:25.404884   80857 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.404936   80857 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 18:40:25.404908   80857 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.404952   80857 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.404998   80857 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.405010   80857 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.406661   80857 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.406667   80857 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 18:40:25.406690   80857 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.407119   80857 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.619950   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 18:40:25.635075   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.641561   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.647362   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.648054   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.649684   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.664183   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.709163   80857 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 18:40:25.709227   80857 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 18:40:25.709275   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.760931   80857 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 18:40:25.760994   80857 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.761042   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.779324   80857 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 18:40:25.779378   80857 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.779429   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799052   80857 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 18:40:25.799097   80857 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.799106   80857 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 18:40:25.799131   80857 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 18:40:25.799190   80857 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.799233   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799136   80857 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.799148   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799298   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.806973   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 18:40:25.807041   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.807066   80857 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 18:40:25.807095   80857 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.807126   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.807137   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.807237   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.811025   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.811114   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.935792   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 18:40:25.935853   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 18:40:25.935863   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 18:40:25.935934   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.935973   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 18:40:25.935996   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 18:40:25.940351   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 18:40:25.970107   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 18:40:26.231894   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:26.372230   80857 cache_images.go:92] duration metric: took 967.383323ms to LoadCachedImages
	W0717 18:40:26.372327   80857 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0717 18:40:26.372346   80857 kubeadm.go:934] updating node { 192.168.39.128 8443 v1.20.0 crio true true} ...
	I0717 18:40:26.372517   80857 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-019549 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:26.372613   80857 ssh_runner.go:195] Run: crio config
	I0717 18:40:26.416155   80857 cni.go:84] Creating CNI manager for ""
	I0717 18:40:26.416181   80857 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:26.416196   80857 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:26.416229   80857 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-019549 NodeName:old-k8s-version-019549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 18:40:26.416526   80857 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-019549"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:26.416595   80857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 18:40:26.426941   80857 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:26.427006   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:26.437810   80857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 18:40:26.460046   80857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:26.482521   80857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 18:40:26.502536   80857 ssh_runner.go:195] Run: grep 192.168.39.128	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:26.506513   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:26.520895   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:26.648931   80857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:26.665278   80857 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549 for IP: 192.168.39.128
	I0717 18:40:26.665300   80857 certs.go:194] generating shared ca certs ...
	I0717 18:40:26.665329   80857 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:26.665508   80857 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:26.665561   80857 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:26.665574   80857 certs.go:256] generating profile certs ...
	I0717 18:40:26.665693   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.key
	I0717 18:40:26.665780   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key.9c9b0a7e
	I0717 18:40:26.665836   80857 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key
	I0717 18:40:26.665998   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:26.666049   80857 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:26.666063   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:26.666095   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:26.666128   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:26.666167   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:26.666225   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:26.667047   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:26.713984   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:26.742617   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:26.770441   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:26.795098   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 18:40:26.825038   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:26.861300   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:26.901664   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:40:26.926357   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:26.948986   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:26.973248   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:26.994642   80857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:27.010158   80857 ssh_runner.go:195] Run: openssl version
	I0717 18:40:27.015861   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:27.026221   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030496   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030567   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.035862   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:27.046312   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:27.057117   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061775   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061824   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.067535   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:27.079022   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:27.090009   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094688   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094768   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.100404   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:27.110653   80857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:27.115117   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:27.120633   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:27.126070   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:27.131500   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:27.137035   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:27.142426   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:40:27.147638   80857 kubeadm.go:392] StartCluster: {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:27.147756   80857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:27.147816   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.187433   80857 cri.go:89] found id: ""
	I0717 18:40:27.187498   80857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:27.197001   80857 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:27.197020   80857 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:27.197070   80857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:27.206758   80857 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:27.207822   80857 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:40:27.208505   80857 kubeconfig.go:62] /home/jenkins/minikube-integration/19283-14386/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-019549" cluster setting kubeconfig missing "old-k8s-version-019549" context setting]
	I0717 18:40:27.209497   80857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:27.212786   80857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:27.222612   80857 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.128
	I0717 18:40:27.222649   80857 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:27.222663   80857 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:27.222721   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.268127   80857 cri.go:89] found id: ""
	I0717 18:40:27.268205   80857 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:27.284334   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:27.293669   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:27.293691   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:27.293743   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:27.305348   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:27.305437   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:27.317749   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:27.328481   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:27.328547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:27.337574   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.346242   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:27.346299   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.354946   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:27.363296   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:27.363350   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:27.371925   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:27.384020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:27.571539   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.767574   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.19599736s)
	I0717 18:40:28.767612   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.011512   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.151980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.258796   80857 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:29.258886   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:29.759072   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.259921   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.758948   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.258967   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.759872   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.259187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.759299   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.259080   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.759583   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.259740   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.759068   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:35.259643   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:35.759432   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.259818   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.759627   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.259968   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.758933   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.259980   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.759776   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.259988   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:40.259910   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:40.759917   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.259718   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.759839   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.259129   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.759772   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.259989   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.759724   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.258978   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.759594   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:45.259185   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:45.759765   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.259009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.759131   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.259477   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.759386   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.259977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.759374   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.259744   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.759440   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.258977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.259867   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.759826   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.259016   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.759708   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.259589   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.759788   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.259753   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.759841   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.259450   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.759932   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.259395   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.759855   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.259739   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.759436   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.258951   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.759931   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.259588   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.759651   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:00.259461   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:00.759148   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.259596   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.759943   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.259670   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.759900   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.259745   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.759843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.259902   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.759850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.259624   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.759258   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.259346   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.759041   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.259467   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.759164   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.259047   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.759959   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.259372   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.759259   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:10.259845   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:10.759671   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.259895   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.759877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.259003   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.759685   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.759844   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.259541   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.759709   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:15.259558   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:15.759585   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.259850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.760009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.259385   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.759208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.259218   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.759779   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.259666   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.759781   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:20.259286   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:20.759048   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.259801   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.759595   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.259582   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.759871   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.259349   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.759659   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.259964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.759899   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:25.259559   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:25.759773   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.759924   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.259509   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.759986   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.259792   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.759564   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:29.259060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:29.259143   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:29.298974   80857 cri.go:89] found id: ""
	I0717 18:41:29.299006   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.299016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:29.299024   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:29.299087   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:29.333764   80857 cri.go:89] found id: ""
	I0717 18:41:29.333786   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.333793   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:29.333801   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:29.333849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:29.369639   80857 cri.go:89] found id: ""
	I0717 18:41:29.369674   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.369688   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:29.369697   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:29.369762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:29.403453   80857 cri.go:89] found id: ""
	I0717 18:41:29.403481   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.403489   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:29.403498   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:29.403555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:29.436662   80857 cri.go:89] found id: ""
	I0717 18:41:29.436687   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.436695   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:29.436701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:29.436749   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:29.471013   80857 cri.go:89] found id: ""
	I0717 18:41:29.471053   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.471064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:29.471074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:29.471139   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:29.502754   80857 cri.go:89] found id: ""
	I0717 18:41:29.502780   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.502787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:29.502793   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:29.502842   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:29.534205   80857 cri.go:89] found id: ""
	I0717 18:41:29.534232   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.534239   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:29.534247   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:29.534259   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:29.585406   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:29.585438   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:29.600629   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:29.600660   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:29.719788   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:29.719807   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:29.719819   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:29.785626   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:29.785662   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:32.325522   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:32.338046   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:32.338120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:32.370073   80857 cri.go:89] found id: ""
	I0717 18:41:32.370099   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.370106   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:32.370112   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:32.370165   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:32.408764   80857 cri.go:89] found id: ""
	I0717 18:41:32.408789   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.408799   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:32.408806   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:32.408862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:32.449078   80857 cri.go:89] found id: ""
	I0717 18:41:32.449108   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.449118   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:32.449125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:32.449176   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:32.481990   80857 cri.go:89] found id: ""
	I0717 18:41:32.482015   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.482022   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:32.482028   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:32.482077   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:32.521902   80857 cri.go:89] found id: ""
	I0717 18:41:32.521932   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.521942   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:32.521949   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:32.521997   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:32.554148   80857 cri.go:89] found id: ""
	I0717 18:41:32.554177   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.554206   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:32.554216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:32.554270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:32.587342   80857 cri.go:89] found id: ""
	I0717 18:41:32.587366   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.587374   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:32.587379   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:32.587425   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:32.619227   80857 cri.go:89] found id: ""
	I0717 18:41:32.619259   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.619270   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:32.619281   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:32.619296   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:32.669085   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:32.669124   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:32.682464   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:32.682500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:32.749218   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:32.749234   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:32.749245   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:32.814510   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:32.814545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:35.362866   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:35.375563   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:35.375643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:35.412355   80857 cri.go:89] found id: ""
	I0717 18:41:35.412380   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.412388   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:35.412393   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:35.412439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:35.446596   80857 cri.go:89] found id: ""
	I0717 18:41:35.446621   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.446629   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:35.446634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:35.446691   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:35.481695   80857 cri.go:89] found id: ""
	I0717 18:41:35.481717   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.481725   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:35.481730   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:35.481783   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:35.514528   80857 cri.go:89] found id: ""
	I0717 18:41:35.514573   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.514584   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:35.514592   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:35.514657   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:35.547831   80857 cri.go:89] found id: ""
	I0717 18:41:35.547858   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.547871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:35.547879   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:35.547941   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:35.579059   80857 cri.go:89] found id: ""
	I0717 18:41:35.579084   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.579097   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:35.579104   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:35.579164   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:35.616442   80857 cri.go:89] found id: ""
	I0717 18:41:35.616480   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.616487   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:35.616492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:35.616545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:35.647535   80857 cri.go:89] found id: ""
	I0717 18:41:35.647564   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.647571   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:35.647579   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:35.647595   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:35.696664   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:35.696692   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:35.710474   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:35.710499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:35.785569   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:35.785595   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:35.785611   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:35.865750   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:35.865785   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
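The cycle above is minikube waiting for the control plane to come up: it polls for a kube-apiserver process (`pgrep -xnf kube-apiserver.*minikube.*`), enumerates each expected component with crictl, and, finding no containers, re-gathers the kubelet, dmesg, describe-nodes, CRI-O and container-status logs before retrying. The same checks can be reproduced by hand on the node; the commands below are copied from the log output (the `minikube ssh` profile name is a placeholder for whatever profile this test created):

	# open a shell on the test VM (profile name is a placeholder)
	minikube ssh -p <profile>

	# is an apiserver process running at all?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# list control-plane containers known to CRI-O (empty output = none were ever created)
	sudo crictl ps -a --name kube-apiserver
	sudo crictl ps -a --name etcd
	sudo crictl ps -a --name kube-scheduler
	sudo crictl ps -a --name kube-controller-manager

	# the same logs the harness gathers on each failed attempt
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400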
	I0717 18:41:38.405391   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:38.417737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:38.417806   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:38.453848   80857 cri.go:89] found id: ""
	I0717 18:41:38.453877   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.453888   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:38.453895   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:38.453949   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:38.487083   80857 cri.go:89] found id: ""
	I0717 18:41:38.487112   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.487122   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:38.487129   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:38.487190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:38.517700   80857 cri.go:89] found id: ""
	I0717 18:41:38.517729   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.517738   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:38.517746   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:38.517808   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:38.547587   80857 cri.go:89] found id: ""
	I0717 18:41:38.547616   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.547625   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:38.547632   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:38.547780   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:38.581511   80857 cri.go:89] found id: ""
	I0717 18:41:38.581535   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.581542   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:38.581548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:38.581675   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:38.618308   80857 cri.go:89] found id: ""
	I0717 18:41:38.618327   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.618334   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:38.618340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:38.618401   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:38.658237   80857 cri.go:89] found id: ""
	I0717 18:41:38.658267   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.658278   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:38.658298   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:38.658359   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:38.694044   80857 cri.go:89] found id: ""
	I0717 18:41:38.694071   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.694080   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:38.694090   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:38.694106   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:38.746621   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:38.746658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:38.758781   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:38.758805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:38.827327   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:38.827345   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:38.827357   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:38.899731   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:38.899762   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:41.437479   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:41.451264   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:41.451336   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:41.489053   80857 cri.go:89] found id: ""
	I0717 18:41:41.489083   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.489093   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:41.489101   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:41.489162   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:41.521954   80857 cri.go:89] found id: ""
	I0717 18:41:41.521985   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.521996   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:41.522003   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:41.522068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:41.556847   80857 cri.go:89] found id: ""
	I0717 18:41:41.556875   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.556884   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:41.556893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:41.557024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:41.591232   80857 cri.go:89] found id: ""
	I0717 18:41:41.591255   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.591263   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:41.591269   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:41.591315   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:41.624533   80857 cri.go:89] found id: ""
	I0717 18:41:41.624565   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.624576   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:41.624583   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:41.624644   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:41.656033   80857 cri.go:89] found id: ""
	I0717 18:41:41.656063   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.656073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:41.656080   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:41.656140   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:41.691686   80857 cri.go:89] found id: ""
	I0717 18:41:41.691715   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.691725   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:41.691732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:41.691789   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:41.724688   80857 cri.go:89] found id: ""
	I0717 18:41:41.724718   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.724729   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:41.724741   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:41.724760   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:41.802855   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:41.802882   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:41.839242   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:41.839271   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:41.889028   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:41.889058   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:41.901598   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:41.901627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:41.972632   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.472824   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:44.487673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:44.487745   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:44.530173   80857 cri.go:89] found id: ""
	I0717 18:41:44.530204   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.530216   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:44.530224   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:44.530288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:44.577865   80857 cri.go:89] found id: ""
	I0717 18:41:44.577891   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.577899   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:44.577905   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:44.577967   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:44.621528   80857 cri.go:89] found id: ""
	I0717 18:41:44.621551   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.621559   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:44.621564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:44.621622   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:44.655456   80857 cri.go:89] found id: ""
	I0717 18:41:44.655488   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.655498   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:44.655505   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:44.655570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:44.688729   80857 cri.go:89] found id: ""
	I0717 18:41:44.688757   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.688767   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:44.688774   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:44.688832   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:44.720190   80857 cri.go:89] found id: ""
	I0717 18:41:44.720220   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.720231   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:44.720238   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:44.720294   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:44.750109   80857 cri.go:89] found id: ""
	I0717 18:41:44.750135   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.750142   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:44.750147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:44.750203   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:44.780039   80857 cri.go:89] found id: ""
	I0717 18:41:44.780066   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.780090   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:44.780098   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:44.780111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:44.829641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:44.829675   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:44.842587   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:44.842616   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:44.906331   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.906355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:44.906369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:44.983364   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:44.983400   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:47.525057   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:47.538586   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:47.538639   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:47.574805   80857 cri.go:89] found id: ""
	I0717 18:41:47.574832   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.574843   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:47.574849   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:47.574906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:47.609576   80857 cri.go:89] found id: ""
	I0717 18:41:47.609603   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.609611   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:47.609617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:47.609662   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:47.643899   80857 cri.go:89] found id: ""
	I0717 18:41:47.643927   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.643936   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:47.643941   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:47.643990   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:47.680365   80857 cri.go:89] found id: ""
	I0717 18:41:47.680404   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.680412   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:47.680418   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:47.680475   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:47.719038   80857 cri.go:89] found id: ""
	I0717 18:41:47.719061   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.719069   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:47.719074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:47.719118   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:47.751708   80857 cri.go:89] found id: ""
	I0717 18:41:47.751735   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.751744   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:47.751750   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:47.751807   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:47.789803   80857 cri.go:89] found id: ""
	I0717 18:41:47.789838   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.789850   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:47.789858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:47.789921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:47.821450   80857 cri.go:89] found id: ""
	I0717 18:41:47.821477   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.821487   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:47.821496   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:47.821511   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:47.886501   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:47.886526   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:47.886544   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:47.960142   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:47.960177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:47.995012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:47.995046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:48.046848   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:48.046884   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
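Every "describe nodes" attempt in this loop fails with "connection refused" on localhost:8443 because no kube-apiserver container was ever created, so the kubeconfig written for the v1.20.0 binaries points at an apiserver that is not listening. A quick way to confirm that on the node (paths taken verbatim from the log; this is a debugging sketch, not part of the test itself):

	# no apiserver container means nothing is serving :8443
	sudo crictl ps -a --name kube-apiserver
	sudo ss -ltnp | grep 8443   # assumes ss is available in the guest image

	# the exact command the harness runs, failing with "connection refused"
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig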
	I0717 18:41:50.560990   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:50.574906   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:50.575051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:50.607647   80857 cri.go:89] found id: ""
	I0717 18:41:50.607674   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.607687   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:50.607696   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:50.607756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:50.640621   80857 cri.go:89] found id: ""
	I0717 18:41:50.640651   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.640660   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:50.640667   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:50.640741   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:50.675269   80857 cri.go:89] found id: ""
	I0717 18:41:50.675293   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.675303   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:50.675313   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:50.675369   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:50.707915   80857 cri.go:89] found id: ""
	I0717 18:41:50.707938   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.707946   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:50.707951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:50.708006   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:50.741149   80857 cri.go:89] found id: ""
	I0717 18:41:50.741170   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.741178   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:50.741184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:50.741288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:50.772768   80857 cri.go:89] found id: ""
	I0717 18:41:50.772792   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.772799   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:50.772804   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:50.772854   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:50.804996   80857 cri.go:89] found id: ""
	I0717 18:41:50.805018   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.805028   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:50.805035   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:50.805094   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:50.838933   80857 cri.go:89] found id: ""
	I0717 18:41:50.838960   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.838971   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:50.838982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:50.838997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:50.886415   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:50.886444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:50.899024   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:50.899049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:50.965388   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:50.965416   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:50.965434   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:51.044449   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:51.044490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.580749   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:53.593759   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:53.593841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:53.626541   80857 cri.go:89] found id: ""
	I0717 18:41:53.626573   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.626582   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:53.626588   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:53.626645   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:53.658492   80857 cri.go:89] found id: ""
	I0717 18:41:53.658520   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.658529   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:53.658537   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:53.658600   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:53.694546   80857 cri.go:89] found id: ""
	I0717 18:41:53.694582   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.694590   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:53.694595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:53.694650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:53.727028   80857 cri.go:89] found id: ""
	I0717 18:41:53.727053   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.727061   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:53.727067   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:53.727129   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:53.762869   80857 cri.go:89] found id: ""
	I0717 18:41:53.762897   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.762906   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:53.762913   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:53.762976   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:53.794133   80857 cri.go:89] found id: ""
	I0717 18:41:53.794158   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.794166   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:53.794172   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:53.794225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:53.828432   80857 cri.go:89] found id: ""
	I0717 18:41:53.828463   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.828473   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:53.828484   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:53.828546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:53.863316   80857 cri.go:89] found id: ""
	I0717 18:41:53.863345   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.863353   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:53.863362   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:53.863384   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.897353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:53.897380   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:53.944213   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:53.944242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:53.957484   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:53.957509   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:54.025962   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:54.025992   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:54.026006   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:56.609502   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:56.621849   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:56.621913   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:56.657469   80857 cri.go:89] found id: ""
	I0717 18:41:56.657498   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.657510   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:56.657517   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:56.657579   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:56.691298   80857 cri.go:89] found id: ""
	I0717 18:41:56.691320   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.691327   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:56.691332   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:56.691386   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:56.723305   80857 cri.go:89] found id: ""
	I0717 18:41:56.723334   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.723344   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:56.723352   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:56.723417   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:56.755893   80857 cri.go:89] found id: ""
	I0717 18:41:56.755918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.755926   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:56.755931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:56.755982   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:56.787777   80857 cri.go:89] found id: ""
	I0717 18:41:56.787807   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.787819   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:56.787828   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:56.787894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:56.821126   80857 cri.go:89] found id: ""
	I0717 18:41:56.821152   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.821163   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:56.821170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:56.821228   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:56.855894   80857 cri.go:89] found id: ""
	I0717 18:41:56.855918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.855926   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:56.855931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:56.855980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:56.893483   80857 cri.go:89] found id: ""
	I0717 18:41:56.893505   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.893512   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:56.893521   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:56.893532   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:56.945355   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:56.945385   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:56.958426   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:56.958451   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:57.025542   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:57.025571   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:57.025585   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:57.100497   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:57.100528   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:59.636400   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:59.648517   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:59.648571   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:59.683954   80857 cri.go:89] found id: ""
	I0717 18:41:59.683978   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.683988   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:59.683995   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:59.684065   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:59.719135   80857 cri.go:89] found id: ""
	I0717 18:41:59.719162   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.719172   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:59.719179   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:59.719243   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:59.755980   80857 cri.go:89] found id: ""
	I0717 18:41:59.756012   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.756023   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:59.756030   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:59.756091   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:59.788147   80857 cri.go:89] found id: ""
	I0717 18:41:59.788176   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.788185   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:59.788191   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:59.788239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:59.819646   80857 cri.go:89] found id: ""
	I0717 18:41:59.819670   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.819679   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:59.819685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:59.819738   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:59.852487   80857 cri.go:89] found id: ""
	I0717 18:41:59.852508   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.852516   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:59.852521   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:59.852586   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:59.883761   80857 cri.go:89] found id: ""
	I0717 18:41:59.883794   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.883805   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:59.883812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:59.883870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:59.914854   80857 cri.go:89] found id: ""
	I0717 18:41:59.914882   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.914889   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:59.914896   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:59.914909   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:59.995619   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:59.995650   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:00.034444   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:00.034472   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:00.084278   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:00.084308   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:00.097771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:00.097796   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:00.161753   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:02.662134   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:02.676200   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:02.676277   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:02.711606   80857 cri.go:89] found id: ""
	I0717 18:42:02.711640   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.711652   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:02.711659   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:02.711711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:02.744704   80857 cri.go:89] found id: ""
	I0717 18:42:02.744728   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.744735   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:02.744741   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:02.744800   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:02.778815   80857 cri.go:89] found id: ""
	I0717 18:42:02.778846   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.778859   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:02.778868   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:02.778936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:02.810896   80857 cri.go:89] found id: ""
	I0717 18:42:02.810928   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.810941   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:02.810950   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:02.811024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:02.843868   80857 cri.go:89] found id: ""
	I0717 18:42:02.843892   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.843903   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:02.843910   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:02.843972   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:02.876311   80857 cri.go:89] found id: ""
	I0717 18:42:02.876338   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.876348   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:02.876356   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:02.876420   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:02.910752   80857 cri.go:89] found id: ""
	I0717 18:42:02.910776   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.910784   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:02.910789   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:02.910835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:02.947286   80857 cri.go:89] found id: ""
	I0717 18:42:02.947318   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.947328   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:02.947337   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:02.947351   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:02.999512   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:02.999542   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:03.014063   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:03.014094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:03.081822   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:03.081844   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:03.081858   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:03.161088   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:03.161117   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:05.699198   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:05.711597   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:05.711654   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:05.749653   80857 cri.go:89] found id: ""
	I0717 18:42:05.749684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.749694   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:05.749703   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:05.749757   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:05.785095   80857 cri.go:89] found id: ""
	I0717 18:42:05.785118   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.785125   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:05.785134   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:05.785179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:05.818085   80857 cri.go:89] found id: ""
	I0717 18:42:05.818111   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.818119   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:05.818125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:05.818171   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:05.851872   80857 cri.go:89] found id: ""
	I0717 18:42:05.851895   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.851902   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:05.851907   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:05.851958   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:05.883924   80857 cri.go:89] found id: ""
	I0717 18:42:05.883948   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.883958   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:05.883965   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:05.884025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:05.916365   80857 cri.go:89] found id: ""
	I0717 18:42:05.916396   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.916407   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:05.916414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:05.916473   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:05.950656   80857 cri.go:89] found id: ""
	I0717 18:42:05.950684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.950695   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:05.950701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:05.950762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:05.992132   80857 cri.go:89] found id: ""
	I0717 18:42:05.992160   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.992169   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:05.992177   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:05.992190   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:06.042162   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:06.042192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:06.055594   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:06.055619   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:06.123007   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:06.123038   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:06.123068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:06.200429   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:06.200460   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.739039   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:08.751520   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:08.751575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:08.783765   80857 cri.go:89] found id: ""
	I0717 18:42:08.783794   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.783805   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:08.783812   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:08.783864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:08.815200   80857 cri.go:89] found id: ""
	I0717 18:42:08.815227   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.815236   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:08.815242   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:08.815289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:08.848970   80857 cri.go:89] found id: ""
	I0717 18:42:08.849002   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.849012   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:08.849021   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:08.849084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:08.881832   80857 cri.go:89] found id: ""
	I0717 18:42:08.881859   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.881866   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:08.881874   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:08.881922   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:08.913119   80857 cri.go:89] found id: ""
	I0717 18:42:08.913142   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.913149   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:08.913155   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:08.913201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:08.947471   80857 cri.go:89] found id: ""
	I0717 18:42:08.947499   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.947509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:08.947515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:08.947570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:08.979570   80857 cri.go:89] found id: ""
	I0717 18:42:08.979599   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.979609   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:08.979615   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:08.979670   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:09.012960   80857 cri.go:89] found id: ""
	I0717 18:42:09.012991   80857 logs.go:276] 0 containers: []
	W0717 18:42:09.013002   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:09.013012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:09.013027   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:09.065732   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:09.065769   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:09.079572   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:09.079602   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:09.151737   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:09.151754   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:09.151766   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:09.230185   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:09.230218   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:11.767189   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:11.780044   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:11.780115   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:11.812700   80857 cri.go:89] found id: ""
	I0717 18:42:11.812722   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.812730   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:11.812736   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:11.812781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:11.846855   80857 cri.go:89] found id: ""
	I0717 18:42:11.846883   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.846893   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:11.846900   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:11.846962   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:11.877671   80857 cri.go:89] found id: ""
	I0717 18:42:11.877700   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.877710   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:11.877716   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:11.877767   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:11.908703   80857 cri.go:89] found id: ""
	I0717 18:42:11.908728   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.908735   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:11.908740   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:11.908786   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:11.942191   80857 cri.go:89] found id: ""
	I0717 18:42:11.942218   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.942225   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:11.942231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:11.942284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:11.974751   80857 cri.go:89] found id: ""
	I0717 18:42:11.974782   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.974798   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:11.974807   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:11.974876   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:12.006287   80857 cri.go:89] found id: ""
	I0717 18:42:12.006317   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.006327   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:12.006335   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:12.006396   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:12.036524   80857 cri.go:89] found id: ""
	I0717 18:42:12.036546   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.036554   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:12.036575   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:12.036599   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:12.085073   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:12.085109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:12.098908   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:12.098937   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:12.161665   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:12.161687   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:12.161702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:12.240349   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:12.240401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:14.781101   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:14.794081   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:14.794149   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:14.828975   80857 cri.go:89] found id: ""
	I0717 18:42:14.829003   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.829013   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:14.829021   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:14.829072   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:14.864858   80857 cri.go:89] found id: ""
	I0717 18:42:14.864886   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.864896   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:14.864903   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:14.864986   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:14.897961   80857 cri.go:89] found id: ""
	I0717 18:42:14.897983   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.897991   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:14.897996   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:14.898041   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:14.935499   80857 cri.go:89] found id: ""
	I0717 18:42:14.935521   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.935529   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:14.935534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:14.935591   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:14.967581   80857 cri.go:89] found id: ""
	I0717 18:42:14.967605   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.967621   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:14.967629   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:14.967688   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:15.001844   80857 cri.go:89] found id: ""
	I0717 18:42:15.001876   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.001888   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:15.001894   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:15.001942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:15.038940   80857 cri.go:89] found id: ""
	I0717 18:42:15.038967   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.038977   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:15.038985   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:15.039043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:15.072636   80857 cri.go:89] found id: ""
	I0717 18:42:15.072665   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.072677   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:15.072688   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:15.072703   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:15.124889   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:15.124934   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:15.138661   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:15.138691   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:15.208762   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:15.208791   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:15.208806   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:15.281302   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:15.281336   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:17.817136   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:17.831013   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:17.831078   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:17.867065   80857 cri.go:89] found id: ""
	I0717 18:42:17.867091   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.867101   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:17.867108   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:17.867166   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:17.904143   80857 cri.go:89] found id: ""
	I0717 18:42:17.904171   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.904180   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:17.904188   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:17.904248   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:17.937450   80857 cri.go:89] found id: ""
	I0717 18:42:17.937478   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.937487   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:17.937492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:17.937556   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:17.970650   80857 cri.go:89] found id: ""
	I0717 18:42:17.970679   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.970689   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:17.970696   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:17.970754   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:18.002329   80857 cri.go:89] found id: ""
	I0717 18:42:18.002355   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.002364   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:18.002371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:18.002430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:18.035253   80857 cri.go:89] found id: ""
	I0717 18:42:18.035278   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.035288   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:18.035295   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:18.035356   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:18.070386   80857 cri.go:89] found id: ""
	I0717 18:42:18.070419   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.070431   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:18.070439   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:18.070507   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:18.106148   80857 cri.go:89] found id: ""
	I0717 18:42:18.106170   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.106177   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:18.106185   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:18.106201   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:18.157359   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:18.157390   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:18.171757   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:18.171782   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:18.242795   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:18.242818   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:18.242831   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:18.316221   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:18.316255   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:20.857953   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:20.870813   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:20.870882   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:20.906033   80857 cri.go:89] found id: ""
	I0717 18:42:20.906065   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.906075   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:20.906083   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:20.906142   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:20.942292   80857 cri.go:89] found id: ""
	I0717 18:42:20.942316   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.942335   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:20.942342   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:20.942403   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:20.985113   80857 cri.go:89] found id: ""
	I0717 18:42:20.985143   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.985151   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:20.985157   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:20.985217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:21.021807   80857 cri.go:89] found id: ""
	I0717 18:42:21.021834   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.021842   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:21.021847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:21.021906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:21.061924   80857 cri.go:89] found id: ""
	I0717 18:42:21.061949   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.061961   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:21.061969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:21.062025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:21.098890   80857 cri.go:89] found id: ""
	I0717 18:42:21.098916   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.098927   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:21.098935   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:21.098991   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:21.132576   80857 cri.go:89] found id: ""
	I0717 18:42:21.132612   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.132621   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:21.132627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:21.132687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:21.167723   80857 cri.go:89] found id: ""
	I0717 18:42:21.167765   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.167778   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:21.167788   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:21.167803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:21.220427   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:21.220461   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:21.233191   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:21.233216   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:21.304462   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:21.304481   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:21.304498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:21.386887   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:21.386925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:23.926518   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:23.940470   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:23.940534   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:23.976739   80857 cri.go:89] found id: ""
	I0717 18:42:23.976763   80857 logs.go:276] 0 containers: []
	W0717 18:42:23.976773   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:23.976778   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:23.976838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:24.007575   80857 cri.go:89] found id: ""
	I0717 18:42:24.007603   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.007612   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:24.007617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:24.007671   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:24.040430   80857 cri.go:89] found id: ""
	I0717 18:42:24.040455   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.040463   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:24.040468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:24.040581   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:24.071602   80857 cri.go:89] found id: ""
	I0717 18:42:24.071629   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.071638   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:24.071644   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:24.071705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:24.109570   80857 cri.go:89] found id: ""
	I0717 18:42:24.109595   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.109602   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:24.109607   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:24.109667   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:24.144284   80857 cri.go:89] found id: ""
	I0717 18:42:24.144305   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.144328   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:24.144333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:24.144382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:24.179441   80857 cri.go:89] found id: ""
	I0717 18:42:24.179467   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.179474   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:24.179479   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:24.179545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:24.222100   80857 cri.go:89] found id: ""
	I0717 18:42:24.222133   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.222143   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:24.222159   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:24.222175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:24.273181   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:24.273215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:24.285835   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:24.285861   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:24.357804   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:24.357826   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:24.357839   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:24.437270   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:24.437310   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:26.979543   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:26.992443   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:26.992497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:27.025520   80857 cri.go:89] found id: ""
	I0717 18:42:27.025548   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.025560   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:27.025567   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:27.025630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:27.059971   80857 cri.go:89] found id: ""
	I0717 18:42:27.060002   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.060011   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:27.060016   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:27.060068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:27.091370   80857 cri.go:89] found id: ""
	I0717 18:42:27.091397   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.091407   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:27.091415   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:27.091468   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:27.123736   80857 cri.go:89] found id: ""
	I0717 18:42:27.123768   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.123779   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:27.123786   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:27.123849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:27.156155   80857 cri.go:89] found id: ""
	I0717 18:42:27.156177   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.156185   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:27.156190   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:27.156239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:27.190701   80857 cri.go:89] found id: ""
	I0717 18:42:27.190729   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.190741   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:27.190749   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:27.190825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:27.222093   80857 cri.go:89] found id: ""
	I0717 18:42:27.222119   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.222130   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:27.222137   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:27.222199   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:27.258789   80857 cri.go:89] found id: ""
	I0717 18:42:27.258813   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.258824   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:27.258834   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:27.258848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:27.307033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:27.307068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:27.321181   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:27.321209   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:27.390560   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:27.390593   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:27.390613   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:27.464352   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:27.464389   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:30.005732   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:30.019088   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:30.019160   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:30.052733   80857 cri.go:89] found id: ""
	I0717 18:42:30.052757   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.052765   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:30.052775   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:30.052836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:30.087683   80857 cri.go:89] found id: ""
	I0717 18:42:30.087711   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.087722   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:30.087729   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:30.087774   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:30.124371   80857 cri.go:89] found id: ""
	I0717 18:42:30.124404   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.124416   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:30.124432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:30.124487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:30.160081   80857 cri.go:89] found id: ""
	I0717 18:42:30.160107   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.160115   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:30.160122   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:30.160173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:30.194420   80857 cri.go:89] found id: ""
	I0717 18:42:30.194447   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.194456   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:30.194464   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:30.194522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:30.229544   80857 cri.go:89] found id: ""
	I0717 18:42:30.229570   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.229584   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:30.229591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:30.229650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:30.264164   80857 cri.go:89] found id: ""
	I0717 18:42:30.264193   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.264204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:30.264211   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:30.264266   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:30.296958   80857 cri.go:89] found id: ""
	I0717 18:42:30.296986   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.296996   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:30.297008   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:30.297049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:30.348116   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:30.348145   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:30.361373   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:30.361401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:30.429601   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:30.429620   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:30.429634   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:30.507718   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:30.507752   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:33.045539   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:33.058149   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:33.058219   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:33.088675   80857 cri.go:89] found id: ""
	I0717 18:42:33.088702   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.088710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:33.088717   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:33.088773   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:33.121269   80857 cri.go:89] found id: ""
	I0717 18:42:33.121297   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.121308   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:33.121315   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:33.121375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:33.156144   80857 cri.go:89] found id: ""
	I0717 18:42:33.156173   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.156184   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:33.156192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:33.156257   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:33.188559   80857 cri.go:89] found id: ""
	I0717 18:42:33.188585   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.188597   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:33.188603   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:33.188651   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:33.219650   80857 cri.go:89] found id: ""
	I0717 18:42:33.219672   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.219680   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:33.219686   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:33.219746   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:33.249704   80857 cri.go:89] found id: ""
	I0717 18:42:33.249728   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.249737   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:33.249742   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:33.249793   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:33.283480   80857 cri.go:89] found id: ""
	I0717 18:42:33.283503   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.283511   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:33.283516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:33.283560   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:33.314577   80857 cri.go:89] found id: ""
	I0717 18:42:33.314620   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.314629   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:33.314638   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:33.314649   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:33.363458   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:33.363491   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:33.377240   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:33.377267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:33.442939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:33.442961   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:33.442976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:33.522422   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:33.522456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:36.063823   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:36.078272   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:36.078342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:36.111460   80857 cri.go:89] found id: ""
	I0717 18:42:36.111494   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.111502   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:36.111509   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:36.111562   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:36.144191   80857 cri.go:89] found id: ""
	I0717 18:42:36.144222   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.144232   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:36.144239   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:36.144306   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:36.177247   80857 cri.go:89] found id: ""
	I0717 18:42:36.177277   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.177288   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:36.177294   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:36.177350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:36.213390   80857 cri.go:89] found id: ""
	I0717 18:42:36.213419   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.213427   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:36.213433   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:36.213493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:36.246775   80857 cri.go:89] found id: ""
	I0717 18:42:36.246799   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.246807   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:36.246812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:36.246870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:36.282441   80857 cri.go:89] found id: ""
	I0717 18:42:36.282463   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.282470   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:36.282476   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:36.282529   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:36.314178   80857 cri.go:89] found id: ""
	I0717 18:42:36.314203   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.314211   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:36.314216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:36.314265   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:36.353705   80857 cri.go:89] found id: ""
	I0717 18:42:36.353730   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.353737   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:36.353746   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:36.353758   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:36.370866   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:36.370894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:36.463660   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:36.463693   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:36.463710   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:36.540337   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:36.540371   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:36.575770   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:36.575801   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.128675   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:39.141187   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:39.141255   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:39.175960   80857 cri.go:89] found id: ""
	I0717 18:42:39.175982   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.175989   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:39.175994   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:39.176051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:39.209442   80857 cri.go:89] found id: ""
	I0717 18:42:39.209472   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.209483   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:39.209490   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:39.209552   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:39.243225   80857 cri.go:89] found id: ""
	I0717 18:42:39.243249   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.243256   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:39.243262   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:39.243309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:39.277369   80857 cri.go:89] found id: ""
	I0717 18:42:39.277396   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.277407   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:39.277414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:39.277464   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:39.310522   80857 cri.go:89] found id: ""
	I0717 18:42:39.310552   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.310563   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:39.310570   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:39.310637   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:39.344186   80857 cri.go:89] found id: ""
	I0717 18:42:39.344208   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.344216   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:39.344221   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:39.344279   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:39.375329   80857 cri.go:89] found id: ""
	I0717 18:42:39.375354   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.375366   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:39.375372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:39.375419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:39.412629   80857 cri.go:89] found id: ""
	I0717 18:42:39.412659   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.412668   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:39.412679   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:39.412696   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:39.447607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:39.447644   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.498981   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:39.499013   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:39.512380   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:39.512409   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:39.580396   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:39.580415   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:39.580428   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:42.158145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:42.177450   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:42.177522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:42.222849   80857 cri.go:89] found id: ""
	I0717 18:42:42.222880   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.222890   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:42.222897   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:42.222954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:42.252712   80857 cri.go:89] found id: ""
	I0717 18:42:42.252742   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.252752   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:42.252757   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:42.252802   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:42.283764   80857 cri.go:89] found id: ""
	I0717 18:42:42.283789   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.283799   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:42.283806   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:42.283864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:42.317243   80857 cri.go:89] found id: ""
	I0717 18:42:42.317270   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.317281   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:42.317288   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:42.317350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:42.349972   80857 cri.go:89] found id: ""
	I0717 18:42:42.350000   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.350010   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:42.350017   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:42.350074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:42.382111   80857 cri.go:89] found id: ""
	I0717 18:42:42.382146   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.382158   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:42.382165   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:42.382223   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:42.414669   80857 cri.go:89] found id: ""
	I0717 18:42:42.414692   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.414700   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:42.414705   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:42.414765   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:42.446533   80857 cri.go:89] found id: ""
	I0717 18:42:42.446571   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.446579   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:42.446588   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:42.446603   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:42.522142   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:42.522165   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:42.522177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:42.602456   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:42.602493   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:42.642192   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:42.642221   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:42.695016   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:42.695046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:45.208310   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:45.221821   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:45.221901   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:45.256887   80857 cri.go:89] found id: ""
	I0717 18:42:45.256914   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.256924   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:45.256930   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:45.256999   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:45.293713   80857 cri.go:89] found id: ""
	I0717 18:42:45.293735   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.293748   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:45.293753   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:45.293799   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:45.328790   80857 cri.go:89] found id: ""
	I0717 18:42:45.328815   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.328824   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:45.328833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:45.328880   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:45.364977   80857 cri.go:89] found id: ""
	I0717 18:42:45.365004   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.365014   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:45.365022   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:45.365084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:45.401131   80857 cri.go:89] found id: ""
	I0717 18:42:45.401157   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.401164   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:45.401170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:45.401217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:45.432252   80857 cri.go:89] found id: ""
	I0717 18:42:45.432279   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.432287   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:45.432293   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:45.432338   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:45.464636   80857 cri.go:89] found id: ""
	I0717 18:42:45.464659   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.464667   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:45.464674   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:45.464728   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:45.494884   80857 cri.go:89] found id: ""
	I0717 18:42:45.494913   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.494924   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:45.494935   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:45.494949   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:45.546578   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:45.546610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:45.559622   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:45.559647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:45.622094   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:45.622114   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:45.622126   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:45.699772   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:45.699814   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.241667   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:48.254205   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:48.254270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:48.293258   80857 cri.go:89] found id: ""
	I0717 18:42:48.293287   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.293298   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:48.293305   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:48.293362   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:48.328778   80857 cri.go:89] found id: ""
	I0717 18:42:48.328807   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.328818   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:48.328824   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:48.328884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:48.360230   80857 cri.go:89] found id: ""
	I0717 18:42:48.360256   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.360266   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:48.360276   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:48.360335   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:48.397770   80857 cri.go:89] found id: ""
	I0717 18:42:48.397797   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.397808   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:48.397815   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:48.397873   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:48.430912   80857 cri.go:89] found id: ""
	I0717 18:42:48.430938   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.430946   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:48.430956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:48.431015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:48.462659   80857 cri.go:89] found id: ""
	I0717 18:42:48.462688   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.462699   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:48.462706   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:48.462771   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:48.497554   80857 cri.go:89] found id: ""
	I0717 18:42:48.497584   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.497594   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:48.497601   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:48.497665   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:48.529524   80857 cri.go:89] found id: ""
	I0717 18:42:48.529547   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.529555   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:48.529564   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:48.529577   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:48.601265   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:48.601285   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:48.601297   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:48.678045   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:48.678075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.718565   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:48.718598   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:48.769923   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:48.769956   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
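	The cycle above repeats roughly every three seconds: minikube probes for a running kube-apiserver, finds no control-plane containers via crictl, then falls back to gathering kubelet, dmesg, CRI-O, and container-status logs while `kubectl describe nodes` keeps failing against localhost:8443. A minimal sketch of the same probe, run by hand on the minikube VM (the command names are taken verbatim from the log lines above; the loop itself is illustrative and not part of the test run):

	# replicate minikube's per-component container probe (assumed: run as a user with sudo on the node)
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"    # empty output corresponds to the 'found id: ""' lines above
	done
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	sudo journalctl -u kubelet -n 400 --no-pager | tail -n 20   # same unit and line count minikube gathers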
	I0717 18:42:51.282887   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:51.295778   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:51.295848   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:51.329324   80857 cri.go:89] found id: ""
	I0717 18:42:51.329351   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.329361   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:51.329369   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:51.329434   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:51.362013   80857 cri.go:89] found id: ""
	I0717 18:42:51.362042   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.362052   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:51.362059   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:51.362120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:51.395039   80857 cri.go:89] found id: ""
	I0717 18:42:51.395069   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.395080   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:51.395087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:51.395155   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:51.427683   80857 cri.go:89] found id: ""
	I0717 18:42:51.427709   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.427717   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:51.427722   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:51.427772   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:51.461683   80857 cri.go:89] found id: ""
	I0717 18:42:51.461706   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.461718   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:51.461723   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:51.461769   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:51.495780   80857 cri.go:89] found id: ""
	I0717 18:42:51.495802   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.495810   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:51.495816   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:51.495867   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:51.527541   80857 cri.go:89] found id: ""
	I0717 18:42:51.527573   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.527583   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:51.527591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:51.527648   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:51.567947   80857 cri.go:89] found id: ""
	I0717 18:42:51.567975   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.567987   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:51.567997   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:51.568014   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:51.620083   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:51.620109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:51.632823   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:51.632848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:51.705731   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:51.705753   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:51.705767   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:51.781969   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:51.782005   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.318011   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:54.331886   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:54.331942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:54.362935   80857 cri.go:89] found id: ""
	I0717 18:42:54.362962   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.362972   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:54.362979   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:54.363032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:54.396153   80857 cri.go:89] found id: ""
	I0717 18:42:54.396180   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.396191   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:54.396198   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:54.396259   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:54.433123   80857 cri.go:89] found id: ""
	I0717 18:42:54.433150   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.433160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:54.433168   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:54.433224   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:54.465034   80857 cri.go:89] found id: ""
	I0717 18:42:54.465064   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.465079   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:54.465087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:54.465200   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:54.496200   80857 cri.go:89] found id: ""
	I0717 18:42:54.496250   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.496263   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:54.496271   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:54.496332   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:54.528618   80857 cri.go:89] found id: ""
	I0717 18:42:54.528646   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.528656   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:54.528664   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:54.528724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:54.563018   80857 cri.go:89] found id: ""
	I0717 18:42:54.563042   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.563052   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:54.563059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:54.563114   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:54.595221   80857 cri.go:89] found id: ""
	I0717 18:42:54.595256   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.595266   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:54.595275   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:54.595291   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:54.608193   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:54.608220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:54.673755   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:54.673778   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:54.673793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:54.756443   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:54.756483   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.792670   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:54.792700   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:57.344637   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:57.357003   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:57.357068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:57.389230   80857 cri.go:89] found id: ""
	I0717 18:42:57.389261   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.389271   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:57.389278   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:57.389372   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:57.421529   80857 cri.go:89] found id: ""
	I0717 18:42:57.421553   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.421571   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:57.421578   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:57.421642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:57.455154   80857 cri.go:89] found id: ""
	I0717 18:42:57.455186   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.455193   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:57.455199   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:57.455245   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:57.490576   80857 cri.go:89] found id: ""
	I0717 18:42:57.490608   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.490621   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:57.490630   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:57.490693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:57.523972   80857 cri.go:89] found id: ""
	I0717 18:42:57.524010   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.524023   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:57.524033   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:57.524092   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:57.558106   80857 cri.go:89] found id: ""
	I0717 18:42:57.558132   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.558140   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:57.558145   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:57.558201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:57.591009   80857 cri.go:89] found id: ""
	I0717 18:42:57.591035   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.591045   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:57.591051   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:57.591110   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:57.624564   80857 cri.go:89] found id: ""
	I0717 18:42:57.624592   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.624601   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:57.624612   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:57.624627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:57.699833   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:57.699868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:57.737029   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:57.737066   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:57.790562   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:57.790605   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:57.804935   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:57.804984   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:57.873081   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:00.374166   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:00.388370   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:00.388443   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:00.421228   80857 cri.go:89] found id: ""
	I0717 18:43:00.421257   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.421268   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:00.421276   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:00.421325   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:00.451819   80857 cri.go:89] found id: ""
	I0717 18:43:00.451846   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.451856   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:00.451862   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:00.451917   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:00.482960   80857 cri.go:89] found id: ""
	I0717 18:43:00.482993   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.483004   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:00.483015   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:00.483074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:00.515860   80857 cri.go:89] found id: ""
	I0717 18:43:00.515882   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.515892   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:00.515899   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:00.515954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:00.548177   80857 cri.go:89] found id: ""
	I0717 18:43:00.548202   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.548212   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:00.548217   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:00.548275   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:00.580759   80857 cri.go:89] found id: ""
	I0717 18:43:00.580782   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.580790   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:00.580795   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:00.580847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:00.618661   80857 cri.go:89] found id: ""
	I0717 18:43:00.618683   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.618691   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:00.618699   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:00.618742   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:00.650503   80857 cri.go:89] found id: ""
	I0717 18:43:00.650528   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.650535   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:00.650544   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:00.650555   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:00.699668   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:00.699697   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:00.714086   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:00.714114   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:00.777051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:00.777087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:00.777105   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:00.859238   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:00.859274   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.399050   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:03.412565   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:03.412626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:03.445993   80857 cri.go:89] found id: ""
	I0717 18:43:03.446026   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.446038   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:03.446045   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:03.446101   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:03.481251   80857 cri.go:89] found id: ""
	I0717 18:43:03.481285   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.481297   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:03.481305   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:03.481371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:03.514406   80857 cri.go:89] found id: ""
	I0717 18:43:03.514433   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.514441   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:03.514447   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:03.514497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:03.546217   80857 cri.go:89] found id: ""
	I0717 18:43:03.546248   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.546258   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:03.546266   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:03.546327   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:03.577287   80857 cri.go:89] found id: ""
	I0717 18:43:03.577318   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.577333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:03.577340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:03.577394   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:03.610080   80857 cri.go:89] found id: ""
	I0717 18:43:03.610101   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.610109   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:03.610114   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:03.610159   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:03.643753   80857 cri.go:89] found id: ""
	I0717 18:43:03.643777   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.643787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:03.643792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:03.643849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:03.676290   80857 cri.go:89] found id: ""
	I0717 18:43:03.676338   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.676345   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:03.676353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:03.676364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:03.727818   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:03.727850   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:03.740752   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:03.740784   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:03.810465   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:03.810485   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:03.810499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:03.889326   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:03.889359   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:06.426949   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:06.440007   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:06.440079   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:06.471689   80857 cri.go:89] found id: ""
	I0717 18:43:06.471715   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.471724   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:06.471729   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:06.471775   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:06.503818   80857 cri.go:89] found id: ""
	I0717 18:43:06.503840   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.503847   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:06.503853   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:06.503900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:06.534733   80857 cri.go:89] found id: ""
	I0717 18:43:06.534755   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.534763   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:06.534768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:06.534818   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:06.565388   80857 cri.go:89] found id: ""
	I0717 18:43:06.565414   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.565421   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:06.565431   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:06.565480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:06.597739   80857 cri.go:89] found id: ""
	I0717 18:43:06.597764   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.597775   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:06.597782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:06.597847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:06.629823   80857 cri.go:89] found id: ""
	I0717 18:43:06.629845   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.629853   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:06.629859   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:06.629921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:06.663753   80857 cri.go:89] found id: ""
	I0717 18:43:06.663779   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.663787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:06.663792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:06.663838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:06.700868   80857 cri.go:89] found id: ""
	I0717 18:43:06.700896   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.700906   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:06.700917   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:06.700932   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:06.753064   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:06.753097   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:06.765845   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:06.765868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:06.834691   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:06.834715   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:06.834729   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:06.908650   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:06.908682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.450804   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:09.463369   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:09.463452   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:09.506992   80857 cri.go:89] found id: ""
	I0717 18:43:09.507020   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.507028   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:09.507035   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:09.507093   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:09.543083   80857 cri.go:89] found id: ""
	I0717 18:43:09.543108   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.543116   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:09.543121   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:09.543174   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:09.576194   80857 cri.go:89] found id: ""
	I0717 18:43:09.576219   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.576226   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:09.576231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:09.576289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:09.610148   80857 cri.go:89] found id: ""
	I0717 18:43:09.610171   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.610178   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:09.610184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:09.610258   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:09.642217   80857 cri.go:89] found id: ""
	I0717 18:43:09.642246   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.642255   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:09.642263   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:09.642342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:09.678041   80857 cri.go:89] found id: ""
	I0717 18:43:09.678064   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.678073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:09.678079   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:09.678141   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:09.711162   80857 cri.go:89] found id: ""
	I0717 18:43:09.711193   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.711204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:09.711212   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:09.711272   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:09.746135   80857 cri.go:89] found id: ""
	I0717 18:43:09.746164   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.746175   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:09.746186   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:09.746197   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:09.799268   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:09.799303   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:09.811910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:09.811935   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:09.876939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:09.876982   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:09.876998   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:09.951468   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:09.951502   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:12.488926   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:12.501054   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:12.501112   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:12.532536   80857 cri.go:89] found id: ""
	I0717 18:43:12.532569   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.532577   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:12.532582   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:12.532629   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:12.565102   80857 cri.go:89] found id: ""
	I0717 18:43:12.565130   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.565141   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:12.565148   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:12.565208   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:12.600262   80857 cri.go:89] found id: ""
	I0717 18:43:12.600299   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.600309   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:12.600316   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:12.600366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:12.633950   80857 cri.go:89] found id: ""
	I0717 18:43:12.633980   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.633991   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:12.633998   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:12.634054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:12.673297   80857 cri.go:89] found id: ""
	I0717 18:43:12.673325   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.673338   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:12.673345   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:12.673406   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:12.707112   80857 cri.go:89] found id: ""
	I0717 18:43:12.707136   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.707144   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:12.707150   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:12.707206   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:12.746323   80857 cri.go:89] found id: ""
	I0717 18:43:12.746348   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.746358   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:12.746372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:12.746433   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:12.779470   80857 cri.go:89] found id: ""
	I0717 18:43:12.779496   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.779507   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:12.779518   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:12.779534   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:12.830156   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:12.830178   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:12.843707   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:12.843734   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:12.911849   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:12.911875   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:12.911891   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:12.986090   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:12.986122   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:15.523428   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:15.536012   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:15.536070   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:15.569179   80857 cri.go:89] found id: ""
	I0717 18:43:15.569208   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.569218   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:15.569225   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:15.569273   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:15.606727   80857 cri.go:89] found id: ""
	I0717 18:43:15.606749   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.606757   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:15.606763   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:15.606805   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:15.638842   80857 cri.go:89] found id: ""
	I0717 18:43:15.638873   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.638883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:15.638889   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:15.638939   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:15.671418   80857 cri.go:89] found id: ""
	I0717 18:43:15.671444   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.671453   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:15.671459   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:15.671517   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:15.704892   80857 cri.go:89] found id: ""
	I0717 18:43:15.704928   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.704937   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:15.704956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:15.705013   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:15.738478   80857 cri.go:89] found id: ""
	I0717 18:43:15.738502   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.738509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:15.738515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:15.738584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:15.771188   80857 cri.go:89] found id: ""
	I0717 18:43:15.771225   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.771237   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:15.771245   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:15.771303   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:15.807737   80857 cri.go:89] found id: ""
	I0717 18:43:15.807763   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.807770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:15.807779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:15.807790   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:15.861202   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:15.861234   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:15.874170   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:15.874200   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:15.938049   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:15.938073   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:15.938086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:16.025420   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:16.025456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:18.563320   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:18.575574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:18.575634   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:18.608673   80857 cri.go:89] found id: ""
	I0717 18:43:18.608700   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.608710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:18.608718   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:18.608782   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:18.641589   80857 cri.go:89] found id: ""
	I0717 18:43:18.641611   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.641618   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:18.641624   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:18.641679   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:18.672232   80857 cri.go:89] found id: ""
	I0717 18:43:18.672258   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.672268   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:18.672274   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:18.672331   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:18.706088   80857 cri.go:89] found id: ""
	I0717 18:43:18.706111   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.706118   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:18.706134   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:18.706179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:18.742475   80857 cri.go:89] found id: ""
	I0717 18:43:18.742503   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.742512   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:18.742518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:18.742575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:18.774141   80857 cri.go:89] found id: ""
	I0717 18:43:18.774169   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.774178   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:18.774183   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:18.774234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:18.806648   80857 cri.go:89] found id: ""
	I0717 18:43:18.806672   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.806679   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:18.806685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:18.806731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:18.838022   80857 cri.go:89] found id: ""
	I0717 18:43:18.838047   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.838054   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:18.838062   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:18.838076   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:18.903467   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:18.903487   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:18.903498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:18.980385   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:18.980432   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:19.020884   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:19.020914   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:19.073530   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:19.073574   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:21.587870   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:21.602130   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:21.602185   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:21.635373   80857 cri.go:89] found id: ""
	I0717 18:43:21.635401   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.635411   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:21.635418   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:21.635480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:21.667175   80857 cri.go:89] found id: ""
	I0717 18:43:21.667200   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.667209   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:21.667216   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:21.667267   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:21.705876   80857 cri.go:89] found id: ""
	I0717 18:43:21.705907   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.705918   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:21.705926   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:21.705988   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:21.753302   80857 cri.go:89] found id: ""
	I0717 18:43:21.753323   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.753330   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:21.753337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:21.753388   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:21.785363   80857 cri.go:89] found id: ""
	I0717 18:43:21.785390   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.785396   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:21.785402   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:21.785448   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:21.817517   80857 cri.go:89] found id: ""
	I0717 18:43:21.817545   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.817553   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:21.817560   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:21.817615   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:21.849451   80857 cri.go:89] found id: ""
	I0717 18:43:21.849478   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.849489   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:21.849497   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:21.849553   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:21.880032   80857 cri.go:89] found id: ""
	I0717 18:43:21.880055   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.880063   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:21.880073   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:21.880086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:21.928498   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:21.928530   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:21.941532   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:21.941565   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:22.014044   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:22.014066   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:22.014081   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:22.090789   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:22.090817   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:24.628401   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:24.643571   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:24.643642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:24.679262   80857 cri.go:89] found id: ""
	I0717 18:43:24.679288   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.679297   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:24.679303   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:24.679360   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:24.713043   80857 cri.go:89] found id: ""
	I0717 18:43:24.713073   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.713085   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:24.713092   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:24.713145   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:24.751459   80857 cri.go:89] found id: ""
	I0717 18:43:24.751496   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.751508   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:24.751518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:24.751584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:24.790793   80857 cri.go:89] found id: ""
	I0717 18:43:24.790820   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.790831   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:24.790838   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:24.790895   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:24.822909   80857 cri.go:89] found id: ""
	I0717 18:43:24.822936   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.822945   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:24.822953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:24.823016   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:24.855369   80857 cri.go:89] found id: ""
	I0717 18:43:24.855418   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.855455   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:24.855468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:24.855557   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:24.891080   80857 cri.go:89] found id: ""
	I0717 18:43:24.891110   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.891127   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:24.891133   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:24.891187   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:24.923679   80857 cri.go:89] found id: ""
	I0717 18:43:24.923812   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.923833   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:24.923847   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:24.923863   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:24.975469   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:24.975499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:24.988671   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:24.988702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:25.055191   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:25.055210   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:25.055223   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:25.138867   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:25.138900   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:27.678822   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:27.691422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:27.691483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:27.723979   80857 cri.go:89] found id: ""
	I0717 18:43:27.724008   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.724016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:27.724022   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:27.724067   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:27.756389   80857 cri.go:89] found id: ""
	I0717 18:43:27.756415   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.756423   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:27.756429   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:27.756476   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:27.787617   80857 cri.go:89] found id: ""
	I0717 18:43:27.787644   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.787652   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:27.787658   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:27.787705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:27.821688   80857 cri.go:89] found id: ""
	I0717 18:43:27.821716   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.821725   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:27.821732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:27.821787   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:27.855353   80857 cri.go:89] found id: ""
	I0717 18:43:27.855378   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.855386   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:27.855392   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:27.855439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:27.887885   80857 cri.go:89] found id: ""
	I0717 18:43:27.887909   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.887917   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:27.887923   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:27.887984   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:27.918797   80857 cri.go:89] found id: ""
	I0717 18:43:27.918820   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.918828   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:27.918833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:27.918884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:27.951255   80857 cri.go:89] found id: ""
	I0717 18:43:27.951283   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.951295   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:27.951306   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:27.951319   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:28.025476   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:28.025506   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:28.063994   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:28.064020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:28.117762   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:28.117805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:28.135688   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:28.135725   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:28.238770   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:30.739930   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:30.754147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:30.754231   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:30.794454   80857 cri.go:89] found id: ""
	I0717 18:43:30.794479   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.794486   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:30.794491   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:30.794548   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:30.831643   80857 cri.go:89] found id: ""
	I0717 18:43:30.831666   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.831673   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:30.831678   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:30.831731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:30.863293   80857 cri.go:89] found id: ""
	I0717 18:43:30.863315   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.863323   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:30.863337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:30.863395   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:30.897830   80857 cri.go:89] found id: ""
	I0717 18:43:30.897859   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.897870   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:30.897877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:30.897929   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:30.933179   80857 cri.go:89] found id: ""
	I0717 18:43:30.933209   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.933220   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:30.933227   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:30.933289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:30.964730   80857 cri.go:89] found id: ""
	I0717 18:43:30.964759   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.964773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:30.964781   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:30.964825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:30.996330   80857 cri.go:89] found id: ""
	I0717 18:43:30.996353   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.996361   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:30.996367   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:30.996419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:31.028193   80857 cri.go:89] found id: ""
	I0717 18:43:31.028220   80857 logs.go:276] 0 containers: []
	W0717 18:43:31.028228   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:31.028237   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:31.028251   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:31.040465   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:31.040490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:31.108127   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:31.108150   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:31.108164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:31.187763   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:31.187797   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:31.224238   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:31.224266   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:33.776145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:33.790045   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:33.790108   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:33.823471   80857 cri.go:89] found id: ""
	I0717 18:43:33.823495   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.823505   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:33.823512   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:33.823568   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:33.860205   80857 cri.go:89] found id: ""
	I0717 18:43:33.860233   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.860243   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:33.860250   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:33.860298   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:33.895469   80857 cri.go:89] found id: ""
	I0717 18:43:33.895499   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.895509   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:33.895516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:33.895578   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:33.938483   80857 cri.go:89] found id: ""
	I0717 18:43:33.938517   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.938527   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:33.938534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:33.938596   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:33.973265   80857 cri.go:89] found id: ""
	I0717 18:43:33.973293   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.973303   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:33.973309   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:33.973382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:34.012669   80857 cri.go:89] found id: ""
	I0717 18:43:34.012696   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.012704   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:34.012710   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:34.012760   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:34.045522   80857 cri.go:89] found id: ""
	I0717 18:43:34.045547   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.045557   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:34.045564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:34.045636   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:34.082927   80857 cri.go:89] found id: ""
	I0717 18:43:34.082957   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.082968   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:34.082979   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:34.082993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:34.134133   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:34.134168   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:34.146814   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:34.146837   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:34.217050   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:34.217079   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:34.217094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:34.298572   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:34.298610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:36.838187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:36.850888   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:36.850948   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:36.883132   80857 cri.go:89] found id: ""
	I0717 18:43:36.883153   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.883160   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:36.883166   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:36.883209   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:36.918310   80857 cri.go:89] found id: ""
	I0717 18:43:36.918339   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.918348   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:36.918353   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:36.918411   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:36.949794   80857 cri.go:89] found id: ""
	I0717 18:43:36.949818   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.949825   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:36.949831   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:36.949889   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:36.980913   80857 cri.go:89] found id: ""
	I0717 18:43:36.980951   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.980962   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:36.980969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:36.981029   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:37.014295   80857 cri.go:89] found id: ""
	I0717 18:43:37.014322   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.014330   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:37.014336   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:37.014397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:37.048555   80857 cri.go:89] found id: ""
	I0717 18:43:37.048581   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.048589   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:37.048595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:37.048643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:37.080533   80857 cri.go:89] found id: ""
	I0717 18:43:37.080561   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.080571   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:37.080577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:37.080640   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:37.112919   80857 cri.go:89] found id: ""
	I0717 18:43:37.112952   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.112963   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:37.112973   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:37.112987   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:37.165012   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:37.165044   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:37.177860   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:37.177881   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:37.244776   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:37.244806   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:37.244824   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:37.322949   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:37.322976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:39.861056   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:39.884509   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:39.884592   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:39.931317   80857 cri.go:89] found id: ""
	I0717 18:43:39.931341   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.931348   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:39.931354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:39.931410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:39.971571   80857 cri.go:89] found id: ""
	I0717 18:43:39.971615   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.971626   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:39.971634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:39.971692   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:40.003851   80857 cri.go:89] found id: ""
	I0717 18:43:40.003875   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.003883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:40.003891   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:40.003942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:40.040403   80857 cri.go:89] found id: ""
	I0717 18:43:40.040430   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.040440   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:40.040445   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:40.040498   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:40.071893   80857 cri.go:89] found id: ""
	I0717 18:43:40.071919   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.071927   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:40.071932   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:40.071979   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:40.111020   80857 cri.go:89] found id: ""
	I0717 18:43:40.111042   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.111052   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:40.111059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:40.111117   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:40.142872   80857 cri.go:89] found id: ""
	I0717 18:43:40.142899   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.142910   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:40.142917   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:40.142975   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:40.179919   80857 cri.go:89] found id: ""
	I0717 18:43:40.179944   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.179953   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:40.179963   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:40.179980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:40.233033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:40.233075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:40.246272   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:40.246299   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:40.311988   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:40.312014   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:40.312033   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:40.395622   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:40.395658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:42.935843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:42.949893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:42.949957   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:42.982429   80857 cri.go:89] found id: ""
	I0717 18:43:42.982451   80857 logs.go:276] 0 containers: []
	W0717 18:43:42.982459   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:42.982464   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:42.982512   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:43.018637   80857 cri.go:89] found id: ""
	I0717 18:43:43.018659   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.018666   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:43.018672   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:43.018719   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:43.054274   80857 cri.go:89] found id: ""
	I0717 18:43:43.054301   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.054310   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:43.054317   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:43.054368   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:43.093382   80857 cri.go:89] found id: ""
	I0717 18:43:43.093408   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.093418   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:43.093425   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:43.093484   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:43.125830   80857 cri.go:89] found id: ""
	I0717 18:43:43.125862   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.125871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:43.125878   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:43.125936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:43.157110   80857 cri.go:89] found id: ""
	I0717 18:43:43.157138   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.157147   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:43.157154   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:43.157215   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:43.188320   80857 cri.go:89] found id: ""
	I0717 18:43:43.188342   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.188349   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:43.188354   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:43.188400   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:43.220650   80857 cri.go:89] found id: ""
	I0717 18:43:43.220679   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.220686   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:43.220695   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:43.220707   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:43.259320   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:43.259358   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:43.308308   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:43.308346   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:43.321865   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:43.321894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:43.396110   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:43.396135   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:43.396147   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:45.976091   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:45.988956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:45.989015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:46.022277   80857 cri.go:89] found id: ""
	I0717 18:43:46.022307   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.022318   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:46.022325   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:46.022398   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:46.057607   80857 cri.go:89] found id: ""
	I0717 18:43:46.057636   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.057646   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:46.057653   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:46.057712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:46.089275   80857 cri.go:89] found id: ""
	I0717 18:43:46.089304   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.089313   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:46.089321   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:46.089378   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:46.123686   80857 cri.go:89] found id: ""
	I0717 18:43:46.123717   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.123726   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:46.123731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:46.123784   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:46.166600   80857 cri.go:89] found id: ""
	I0717 18:43:46.166628   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.166638   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:46.166645   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:46.166704   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:46.202518   80857 cri.go:89] found id: ""
	I0717 18:43:46.202543   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.202562   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:46.202568   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:46.202612   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:46.234573   80857 cri.go:89] found id: ""
	I0717 18:43:46.234608   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.234620   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:46.234627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:46.234687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:46.265305   80857 cri.go:89] found id: ""
	I0717 18:43:46.265333   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.265343   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:46.265355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:46.265369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:46.342963   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:46.342993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:46.377170   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:46.377208   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:46.429641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:46.429673   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:46.442168   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:46.442195   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:46.516656   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.016877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:49.030308   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:49.030375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:49.062400   80857 cri.go:89] found id: ""
	I0717 18:43:49.062423   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.062430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:49.062435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:49.062486   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:49.097110   80857 cri.go:89] found id: ""
	I0717 18:43:49.097131   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.097137   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:49.097142   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:49.097190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:49.128535   80857 cri.go:89] found id: ""
	I0717 18:43:49.128558   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.128571   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:49.128577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:49.128626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:49.162505   80857 cri.go:89] found id: ""
	I0717 18:43:49.162530   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.162538   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:49.162544   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:49.162594   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:49.194912   80857 cri.go:89] found id: ""
	I0717 18:43:49.194939   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.194950   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:49.194957   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:49.195025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:49.227055   80857 cri.go:89] found id: ""
	I0717 18:43:49.227083   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.227092   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:49.227098   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:49.227147   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:49.259568   80857 cri.go:89] found id: ""
	I0717 18:43:49.259596   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.259607   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:49.259618   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:49.259673   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:49.291700   80857 cri.go:89] found id: ""
	I0717 18:43:49.291727   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.291735   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:49.291744   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:49.291755   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:49.344600   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:49.344636   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:49.357680   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:49.357705   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:49.427160   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.427180   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:49.427192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:49.504151   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:49.504182   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:52.041591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:52.054775   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:52.054841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:52.085858   80857 cri.go:89] found id: ""
	I0717 18:43:52.085892   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.085904   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:52.085911   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:52.085961   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:52.124100   80857 cri.go:89] found id: ""
	I0717 18:43:52.124122   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.124130   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:52.124135   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:52.124195   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:52.155056   80857 cri.go:89] found id: ""
	I0717 18:43:52.155079   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.155087   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:52.155093   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:52.155154   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:52.189318   80857 cri.go:89] found id: ""
	I0717 18:43:52.189349   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.189359   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:52.189366   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:52.189430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:52.222960   80857 cri.go:89] found id: ""
	I0717 18:43:52.222988   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.222999   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:52.223006   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:52.223071   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:52.255807   80857 cri.go:89] found id: ""
	I0717 18:43:52.255834   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.255841   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:52.255847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:52.255904   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:52.286596   80857 cri.go:89] found id: ""
	I0717 18:43:52.286628   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.286641   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:52.286648   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:52.286703   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:52.319607   80857 cri.go:89] found id: ""
	I0717 18:43:52.319632   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.319641   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:52.319652   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:52.319666   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:52.371270   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:52.371301   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:52.384771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:52.384803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:52.456408   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:52.456432   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:52.456444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:52.533724   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:52.533759   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:55.072554   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:55.087005   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:55.087086   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:55.123300   80857 cri.go:89] found id: ""
	I0717 18:43:55.123325   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.123331   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:55.123336   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:55.123390   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:55.158476   80857 cri.go:89] found id: ""
	I0717 18:43:55.158502   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.158509   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:55.158515   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:55.158572   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:55.198489   80857 cri.go:89] found id: ""
	I0717 18:43:55.198511   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.198518   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:55.198524   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:55.198567   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:55.230901   80857 cri.go:89] found id: ""
	I0717 18:43:55.230933   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.230943   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:55.230951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:55.231028   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:55.262303   80857 cri.go:89] found id: ""
	I0717 18:43:55.262326   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.262333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:55.262340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:55.262393   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:55.293889   80857 cri.go:89] found id: ""
	I0717 18:43:55.293916   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.293925   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:55.293930   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:55.293983   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:55.325695   80857 cri.go:89] found id: ""
	I0717 18:43:55.325720   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.325727   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:55.325737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:55.325797   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:55.360021   80857 cri.go:89] found id: ""
	I0717 18:43:55.360044   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.360052   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:55.360059   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:55.360075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:55.372088   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:55.372111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:55.442073   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:55.442101   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:55.442116   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:55.521733   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:55.521763   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:55.558914   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:55.558947   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.114001   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:58.126283   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:58.126353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:58.162769   80857 cri.go:89] found id: ""
	I0717 18:43:58.162800   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.162810   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:58.162815   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:58.162862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:58.197359   80857 cri.go:89] found id: ""
	I0717 18:43:58.197386   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.197397   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:58.197404   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:58.197465   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:58.229662   80857 cri.go:89] found id: ""
	I0717 18:43:58.229691   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.229700   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:58.229707   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:58.229766   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:58.261810   80857 cri.go:89] found id: ""
	I0717 18:43:58.261832   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.261838   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:58.261844   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:58.261900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:58.293243   80857 cri.go:89] found id: ""
	I0717 18:43:58.293271   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.293282   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:58.293290   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:58.293353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:58.325689   80857 cri.go:89] found id: ""
	I0717 18:43:58.325714   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.325724   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:58.325731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:58.325785   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:58.357381   80857 cri.go:89] found id: ""
	I0717 18:43:58.357406   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.357416   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:58.357422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:58.357483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:58.389859   80857 cri.go:89] found id: ""
	I0717 18:43:58.389888   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.389900   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:58.389910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:58.389926   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:58.458034   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:58.458058   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:58.458072   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:58.536134   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:58.536164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:58.573808   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:58.573834   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.624956   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:58.624985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:01.138486   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:01.151547   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:01.151610   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:01.186397   80857 cri.go:89] found id: ""
	I0717 18:44:01.186422   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.186430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:01.186435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:01.186487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:01.220797   80857 cri.go:89] found id: ""
	I0717 18:44:01.220822   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.220830   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:01.220849   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:01.220894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:01.257640   80857 cri.go:89] found id: ""
	I0717 18:44:01.257666   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.257674   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:01.257680   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:01.257727   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:01.295393   80857 cri.go:89] found id: ""
	I0717 18:44:01.295418   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.295425   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:01.295432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:01.295493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:01.327242   80857 cri.go:89] found id: ""
	I0717 18:44:01.327261   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.327268   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:01.327273   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:01.327319   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:01.358559   80857 cri.go:89] found id: ""
	I0717 18:44:01.358586   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.358593   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:01.358599   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:01.358647   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:01.392301   80857 cri.go:89] found id: ""
	I0717 18:44:01.392332   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.392341   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:01.392346   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:01.392407   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:01.424422   80857 cri.go:89] found id: ""
	I0717 18:44:01.424449   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.424457   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:01.424465   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:01.424477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:01.473298   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:01.473332   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:01.487444   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:01.487471   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:01.552548   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:01.552572   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:01.552586   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:01.634203   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:01.634242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:04.175618   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:04.188071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:04.188150   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:04.222149   80857 cri.go:89] found id: ""
	I0717 18:44:04.222173   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.222180   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:04.222185   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:04.222242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:04.257174   80857 cri.go:89] found id: ""
	I0717 18:44:04.257211   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.257223   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:04.257232   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:04.257284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:04.291628   80857 cri.go:89] found id: ""
	I0717 18:44:04.291653   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.291666   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:04.291673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:04.291733   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:04.325935   80857 cri.go:89] found id: ""
	I0717 18:44:04.325964   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.325975   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:04.325982   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:04.326043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:04.356610   80857 cri.go:89] found id: ""
	I0717 18:44:04.356638   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.356648   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:04.356655   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:04.356712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:04.387728   80857 cri.go:89] found id: ""
	I0717 18:44:04.387764   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.387773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:04.387782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:04.387840   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:04.421452   80857 cri.go:89] found id: ""
	I0717 18:44:04.421479   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.421488   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:04.421495   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:04.421555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:04.453111   80857 cri.go:89] found id: ""
	I0717 18:44:04.453139   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.453150   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:04.453161   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:04.453175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:04.506185   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:04.506215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:04.523611   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:04.523638   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:04.591051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:04.591074   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:04.591091   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:04.666603   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:04.666647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:07.205208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:07.218182   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:07.218236   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:07.254521   80857 cri.go:89] found id: ""
	I0717 18:44:07.254554   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.254565   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:07.254571   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:07.254638   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:07.293622   80857 cri.go:89] found id: ""
	I0717 18:44:07.293650   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.293658   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:07.293663   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:07.293711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:07.331056   80857 cri.go:89] found id: ""
	I0717 18:44:07.331083   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.331091   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:07.331097   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:07.331157   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:07.368445   80857 cri.go:89] found id: ""
	I0717 18:44:07.368476   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.368484   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:07.368491   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:07.368541   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:07.405507   80857 cri.go:89] found id: ""
	I0717 18:44:07.405539   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.405550   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:07.405557   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:07.405617   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:07.444752   80857 cri.go:89] found id: ""
	I0717 18:44:07.444782   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.444792   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:07.444801   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:07.444859   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:07.486976   80857 cri.go:89] found id: ""
	I0717 18:44:07.487006   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.487016   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:07.487024   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:07.487073   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:07.522561   80857 cri.go:89] found id: ""
	I0717 18:44:07.522590   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.522599   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:07.522607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:07.522618   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:07.576350   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:07.576382   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:07.591491   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:07.591517   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:07.659860   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:07.659886   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:07.659902   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:07.743445   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:07.743478   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:10.284468   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:10.296549   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:10.296608   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:10.331209   80857 cri.go:89] found id: ""
	I0717 18:44:10.331236   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.331246   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:10.331252   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:10.331297   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:10.363911   80857 cri.go:89] found id: ""
	I0717 18:44:10.363941   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.363949   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:10.363954   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:10.364001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:10.395935   80857 cri.go:89] found id: ""
	I0717 18:44:10.395960   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.395970   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:10.395977   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:10.396021   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:10.428307   80857 cri.go:89] found id: ""
	I0717 18:44:10.428337   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.428344   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:10.428351   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:10.428397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:10.459615   80857 cri.go:89] found id: ""
	I0717 18:44:10.459643   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.459654   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:10.459661   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:10.459715   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:10.491593   80857 cri.go:89] found id: ""
	I0717 18:44:10.491617   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.491628   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:10.491636   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:10.491693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:10.526822   80857 cri.go:89] found id: ""
	I0717 18:44:10.526846   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.526853   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:10.526858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:10.526918   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:10.561037   80857 cri.go:89] found id: ""
	I0717 18:44:10.561066   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.561077   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:10.561087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:10.561101   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:10.643333   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:10.643364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:10.684673   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:10.684704   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:10.736191   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:10.736220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:10.748762   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:10.748793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:10.812121   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.313033   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:13.325692   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:13.325756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:13.358306   80857 cri.go:89] found id: ""
	I0717 18:44:13.358336   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.358345   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:13.358352   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:13.358410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:13.393233   80857 cri.go:89] found id: ""
	I0717 18:44:13.393264   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.393274   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:13.393282   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:13.393340   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:13.424256   80857 cri.go:89] found id: ""
	I0717 18:44:13.424287   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.424298   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:13.424305   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:13.424358   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:13.454988   80857 cri.go:89] found id: ""
	I0717 18:44:13.455010   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.455018   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:13.455023   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:13.455069   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:13.491019   80857 cri.go:89] found id: ""
	I0717 18:44:13.491046   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.491054   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:13.491060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:13.491107   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:13.523045   80857 cri.go:89] found id: ""
	I0717 18:44:13.523070   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.523079   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:13.523085   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:13.523131   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:13.555442   80857 cri.go:89] found id: ""
	I0717 18:44:13.555470   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.555483   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:13.555489   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:13.555549   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:13.588891   80857 cri.go:89] found id: ""
	I0717 18:44:13.588921   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.588931   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:13.588958   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:13.588973   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:13.663635   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.663659   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:13.663674   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:13.749098   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:13.749135   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:13.785489   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:13.785524   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:13.837098   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:13.837128   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:16.350571   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:16.364398   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:16.364470   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:16.400677   80857 cri.go:89] found id: ""
	I0717 18:44:16.400708   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.400719   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:16.400726   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:16.400781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:16.431715   80857 cri.go:89] found id: ""
	I0717 18:44:16.431743   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.431754   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:16.431760   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:16.431836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:16.465115   80857 cri.go:89] found id: ""
	I0717 18:44:16.465148   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.465160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:16.465167   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:16.465230   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:16.497906   80857 cri.go:89] found id: ""
	I0717 18:44:16.497933   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.497944   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:16.497952   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:16.498008   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:16.534066   80857 cri.go:89] found id: ""
	I0717 18:44:16.534097   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.534108   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:16.534116   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:16.534173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:16.566679   80857 cri.go:89] found id: ""
	I0717 18:44:16.566706   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.566717   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:16.566724   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:16.566781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:16.598397   80857 cri.go:89] found id: ""
	I0717 18:44:16.598416   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.598422   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:16.598427   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:16.598480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:16.629943   80857 cri.go:89] found id: ""
	I0717 18:44:16.629975   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.629998   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:16.630017   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:16.630032   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:16.706452   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:16.706489   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:16.744971   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:16.745003   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:16.796450   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:16.796477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:16.809192   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:16.809217   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:16.875699   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.376821   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:19.389921   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:19.389980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:19.423837   80857 cri.go:89] found id: ""
	I0717 18:44:19.423862   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.423870   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:19.423877   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:19.423934   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:19.468267   80857 cri.go:89] found id: ""
	I0717 18:44:19.468293   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.468305   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:19.468311   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:19.468371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:19.503286   80857 cri.go:89] found id: ""
	I0717 18:44:19.503315   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.503326   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:19.503333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:19.503391   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:19.535505   80857 cri.go:89] found id: ""
	I0717 18:44:19.535531   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.535542   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:19.535548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:19.535607   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:19.568678   80857 cri.go:89] found id: ""
	I0717 18:44:19.568704   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.568711   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:19.568717   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:19.568762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:19.604027   80857 cri.go:89] found id: ""
	I0717 18:44:19.604053   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.604064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:19.604071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:19.604127   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:19.637357   80857 cri.go:89] found id: ""
	I0717 18:44:19.637387   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.637397   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:19.637403   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:19.637450   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:19.669094   80857 cri.go:89] found id: ""
	I0717 18:44:19.669126   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.669136   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:19.669145   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:19.669160   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:19.720218   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:19.720248   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:19.733320   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:19.733343   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:19.796229   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.796252   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:19.796267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:19.871157   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:19.871186   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:22.409012   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:22.421477   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:22.421546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:22.457314   80857 cri.go:89] found id: ""
	I0717 18:44:22.457337   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.457346   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:22.457354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:22.457410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:22.490998   80857 cri.go:89] found id: ""
	I0717 18:44:22.491022   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.491030   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:22.491037   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:22.491090   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:22.523904   80857 cri.go:89] found id: ""
	I0717 18:44:22.523934   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.523945   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:22.523953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:22.524012   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:22.555917   80857 cri.go:89] found id: ""
	I0717 18:44:22.555947   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.555956   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:22.555962   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:22.556026   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:22.588510   80857 cri.go:89] found id: ""
	I0717 18:44:22.588552   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.588565   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:22.588574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:22.588652   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:22.621854   80857 cri.go:89] found id: ""
	I0717 18:44:22.621883   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.621893   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:22.621901   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:22.621956   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:22.653897   80857 cri.go:89] found id: ""
	I0717 18:44:22.653921   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.653931   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:22.653938   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:22.654001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:22.685731   80857 cri.go:89] found id: ""
	I0717 18:44:22.685760   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.685770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:22.685779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:22.685792   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:22.735514   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:22.735545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:22.748148   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:22.748169   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:22.809637   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:22.809666   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:22.809682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:22.886014   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:22.886050   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:25.431906   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:25.444866   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:25.444965   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:25.477211   80857 cri.go:89] found id: ""
	I0717 18:44:25.477245   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.477257   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:25.477264   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:25.477366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:25.512077   80857 cri.go:89] found id: ""
	I0717 18:44:25.512108   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.512120   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:25.512127   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:25.512177   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:25.543953   80857 cri.go:89] found id: ""
	I0717 18:44:25.543974   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.543981   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:25.543987   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:25.544032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:25.574955   80857 cri.go:89] found id: ""
	I0717 18:44:25.574980   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.574990   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:25.574997   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:25.575054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:25.607078   80857 cri.go:89] found id: ""
	I0717 18:44:25.607106   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.607117   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:25.607125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:25.607188   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:25.643129   80857 cri.go:89] found id: ""
	I0717 18:44:25.643152   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.643162   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:25.643169   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:25.643225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:25.678220   80857 cri.go:89] found id: ""
	I0717 18:44:25.678241   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.678249   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:25.678254   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:25.678309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:25.715405   80857 cri.go:89] found id: ""
	I0717 18:44:25.715433   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.715446   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:25.715458   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:25.715474   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:25.772978   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:25.773008   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:25.786559   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:25.786587   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:25.853369   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:25.853386   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:25.853398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:25.954346   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:25.954398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:28.498591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:28.511701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:28.511762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:28.543527   80857 cri.go:89] found id: ""
	I0717 18:44:28.543551   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.543559   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:28.543565   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:28.543624   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:28.574737   80857 cri.go:89] found id: ""
	I0717 18:44:28.574762   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.574769   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:28.574776   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:28.574835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:28.608129   80857 cri.go:89] found id: ""
	I0717 18:44:28.608166   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.608174   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:28.608179   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:28.608234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:28.644324   80857 cri.go:89] found id: ""
	I0717 18:44:28.644348   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.644357   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:28.644371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:28.644426   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:28.675830   80857 cri.go:89] found id: ""
	I0717 18:44:28.675859   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.675870   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:28.675877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:28.675937   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:28.705713   80857 cri.go:89] found id: ""
	I0717 18:44:28.705749   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.705760   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:28.705768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:28.705821   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:28.738648   80857 cri.go:89] found id: ""
	I0717 18:44:28.738677   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.738688   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:28.738695   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:28.738752   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:28.768877   80857 cri.go:89] found id: ""
	I0717 18:44:28.768906   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.768916   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:28.768927   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:28.768953   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:28.818951   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:28.818985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:28.832813   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:28.832843   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:28.910030   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:28.910051   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:28.910063   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:28.986706   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:28.986743   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:31.529154   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:31.543261   80857 kubeadm.go:597] duration metric: took 4m4.346231712s to restartPrimaryControlPlane
	W0717 18:44:31.543327   80857 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:31.543350   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:44:36.752008   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.208633612s)
	I0717 18:44:36.752076   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:44:36.765411   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:44:36.774556   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:44:36.783406   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:44:36.783427   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:44:36.783479   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:44:36.791953   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:44:36.792007   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:44:36.800929   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:44:36.808988   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:44:36.809049   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:44:36.817312   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.825586   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:44:36.825648   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.834783   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:44:36.843109   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:44:36.843166   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:44:36.852276   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:44:37.058251   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:46:33.124646   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:46:33.124790   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:46:33.126245   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.126307   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.126409   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.126547   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.126673   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:33.126734   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:33.128541   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:33.128626   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:33.128707   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:33.128817   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:33.128901   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:33.129018   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:33.129091   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:33.129172   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:33.129249   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:33.129339   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:33.129408   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:33.129444   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:33.129532   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:33.129603   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:33.129665   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:33.129765   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:33.129812   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:33.129929   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:33.130037   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:33.130093   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:33.130177   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:33.131546   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:33.131652   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:33.131750   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:33.131858   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:33.131939   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:33.132085   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:46:33.132133   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:46:33.132189   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132355   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132419   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132585   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132657   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132839   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132900   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133143   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133248   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133452   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133460   80857 kubeadm.go:310] 
	I0717 18:46:33.133494   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:46:33.133529   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:46:33.133535   80857 kubeadm.go:310] 
	I0717 18:46:33.133564   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:46:33.133599   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:46:33.133727   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:46:33.133752   80857 kubeadm.go:310] 
	I0717 18:46:33.133905   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:46:33.133947   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:46:33.134002   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:46:33.134012   80857 kubeadm.go:310] 
	I0717 18:46:33.134116   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:46:33.134186   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:46:33.134193   80857 kubeadm.go:310] 
	I0717 18:46:33.134290   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:46:33.134367   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:46:33.134431   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:46:33.134491   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:46:33.134533   80857 kubeadm.go:310] 
	W0717 18:46:33.134615   80857 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 18:46:33.134669   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:46:33.590879   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:33.605393   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:46:33.614382   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:46:33.614405   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:46:33.614450   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:46:33.622849   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:46:33.622905   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:46:33.631852   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:46:33.640160   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:46:33.640211   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:46:33.648774   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.656740   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:46:33.656796   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.665799   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:46:33.674492   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:46:33.674547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:46:33.683627   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:46:33.746405   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.746472   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.881152   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.881297   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.881443   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:34.053199   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:34.055757   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:34.055843   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:34.055918   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:34.056030   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:34.056129   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:34.056232   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:34.056336   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:34.056431   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:34.056524   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:34.056656   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:34.056764   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:34.056824   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:34.056900   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:34.276456   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:34.491418   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:34.702265   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:34.874511   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:34.895484   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:34.896451   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:34.896536   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:35.040208   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:35.042291   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:35.042437   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:35.042565   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:35.044391   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:35.046206   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:35.050843   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:47:15.053070   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:47:15.053416   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:15.053586   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:20.053963   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:20.054207   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:30.054801   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:30.055011   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:50.055270   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:50.055465   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.053919   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:48:30.054133   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.054148   80857 kubeadm.go:310] 
	I0717 18:48:30.054231   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:48:30.054300   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:48:30.054326   80857 kubeadm.go:310] 
	I0717 18:48:30.054386   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:48:30.054443   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:48:30.054581   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:48:30.054593   80857 kubeadm.go:310] 
	I0717 18:48:30.054715   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:48:30.054761   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:48:30.054810   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:48:30.054818   80857 kubeadm.go:310] 
	I0717 18:48:30.054970   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:48:30.055069   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:48:30.055081   80857 kubeadm.go:310] 
	I0717 18:48:30.055236   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:48:30.055332   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:48:30.055396   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:48:30.055457   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:48:30.055483   80857 kubeadm.go:310] 
	I0717 18:48:30.056139   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:48:30.056246   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:48:30.056338   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:48:30.056413   80857 kubeadm.go:394] duration metric: took 8m2.908780359s to StartCluster
	I0717 18:48:30.056461   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:48:30.056524   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:48:30.102640   80857 cri.go:89] found id: ""
	I0717 18:48:30.102662   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.102669   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:48:30.102674   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:48:30.102724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:48:30.142516   80857 cri.go:89] found id: ""
	I0717 18:48:30.142548   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.142559   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:48:30.142567   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:48:30.142630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:48:30.178558   80857 cri.go:89] found id: ""
	I0717 18:48:30.178589   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.178598   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:48:30.178604   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:48:30.178677   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:48:30.211146   80857 cri.go:89] found id: ""
	I0717 18:48:30.211177   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.211186   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:48:30.211192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:48:30.211242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:48:30.244287   80857 cri.go:89] found id: ""
	I0717 18:48:30.244308   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.244314   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:48:30.244319   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:48:30.244364   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:48:30.274547   80857 cri.go:89] found id: ""
	I0717 18:48:30.274577   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.274587   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:48:30.274594   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:48:30.274660   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:48:30.306796   80857 cri.go:89] found id: ""
	I0717 18:48:30.306825   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.306835   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:48:30.306842   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:48:30.306903   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:48:30.341938   80857 cri.go:89] found id: ""
	I0717 18:48:30.341962   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.341972   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:48:30.341982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:48:30.341997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:48:30.407881   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:48:30.407925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:48:30.430885   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:48:30.430913   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:48:30.525366   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:48:30.525394   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:48:30.525408   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:48:30.639556   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:48:30.639588   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 18:48:30.677493   80857 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 18:48:30.677544   80857 out.go:239] * 
	* 
	W0717 18:48:30.677604   80857 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.677636   80857 out.go:239] * 
	* 
	W0717 18:48:30.678483   80857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:48:30.681792   80857 out.go:177] 
	W0717 18:48:30.682976   80857 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.683034   80857 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 18:48:30.683050   80857 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 18:48:30.684325   80857 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-019549 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
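A hedged debugging sketch based only on the advice printed in the kubeadm/minikube output above (the kubelet health probe, the service/journal checks, the crictl listing, and minikube's cgroup-driver suggestion). The profile name old-k8s-version-019549, the crio socket path, and all flags are taken from the log; it is an assumption, not a confirmed fix, that the cgroup-driver override resolves this particular failure:

	# Probe the kubelet health endpoint that kubeadm polls (connection refused above)
	curl -sSL http://localhost:10248/healthz
	# Inspect the kubelet service state and its recent journal entries
	systemctl status kubelet
	journalctl -xeu kubelet
	# List Kubernetes containers through cri-o to look for a crashed control-plane component
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry the start with the cgroup-driver override minikube suggests (subset of the original flags)
	out/minikube-linux-amd64 start -p old-k8s-version-019549 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd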
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 2 (220.750538ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-019549 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-019549 logs -n 25: (1.552887065s)
E0717 18:48:32.851134   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-527415            | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-371172                                        | pause-371172                 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-341716 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | disable-driver-mounts-341716                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:34 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-066175             | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC | 17 Jul 24 18:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-066175                                   | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-022930  | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC | 17 Jul 24 18:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-527415                 | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-019549        | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-066175                  | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-066175 --memory=2200                     | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:45 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-019549             | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-022930       | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC | 17 Jul 24 18:45 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:37:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:37:14.473404   81068 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:37:14.473526   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473535   81068 out.go:304] Setting ErrFile to fd 2...
	I0717 18:37:14.473540   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473714   81068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:37:14.474251   81068 out.go:298] Setting JSON to false
	I0717 18:37:14.475115   81068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8377,"bootTime":1721233057,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:37:14.475172   81068 start.go:139] virtualization: kvm guest
	I0717 18:37:14.477356   81068 out.go:177] * [default-k8s-diff-port-022930] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:37:14.478600   81068 notify.go:220] Checking for updates...
	I0717 18:37:14.478615   81068 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:37:14.480094   81068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:37:14.481516   81068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:37:14.482886   81068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:37:14.484159   81068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:37:14.485449   81068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:37:14.487164   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:37:14.487744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.487795   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.502368   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0717 18:37:14.502712   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.503192   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.503213   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.503574   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.503778   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.504032   81068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:37:14.504326   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.504381   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.518330   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0717 18:37:14.518718   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.519095   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.519114   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.519409   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.519578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.549923   81068 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:37:14.551160   81068 start.go:297] selected driver: kvm2
	I0717 18:37:14.551175   81068 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.551302   81068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:37:14.551931   81068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.552008   81068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:37:14.566038   81068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:37:14.566371   81068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:37:14.566443   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:37:14.566466   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:37:14.566516   81068 start.go:340] cluster config:
	{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.566643   81068 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.568602   81068 out.go:177] * Starting "default-k8s-diff-port-022930" primary control-plane node in "default-k8s-diff-port-022930" cluster
	I0717 18:37:13.057187   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:16.129274   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:14.569868   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:37:14.569908   81068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:37:14.569919   81068 cache.go:56] Caching tarball of preloaded images
	I0717 18:37:14.569992   81068 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:37:14.570003   81068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:37:14.570100   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:37:14.570277   81068 start.go:360] acquireMachinesLock for default-k8s-diff-port-022930: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:37:22.209207   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:25.281226   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:31.361221   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:34.433258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:40.513234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:43.585225   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:49.665198   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:52.737256   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:58.817201   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:01.889213   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:07.969247   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:11.041264   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:17.121227   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:20.193250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:26.273206   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:29.345193   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:35.425259   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:38.497261   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:44.577185   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:47.649306   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:53.729234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:56.801257   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:02.881239   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:05.953258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:12.033251   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:15.105230   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:21.185200   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:24.257195   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:30.337181   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:33.409224   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:39.489219   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:42.561250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:45.565739   80401 start.go:364] duration metric: took 4m11.345351864s to acquireMachinesLock for "no-preload-066175"
	I0717 18:39:45.565801   80401 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:39:45.565807   80401 fix.go:54] fixHost starting: 
	I0717 18:39:45.566167   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:39:45.566198   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:39:45.580996   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45665
	I0717 18:39:45.581389   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:39:45.581797   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:39:45.581817   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:39:45.582145   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:39:45.582323   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:39:45.582467   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:39:45.584074   80401 fix.go:112] recreateIfNeeded on no-preload-066175: state=Stopped err=<nil>
	I0717 18:39:45.584109   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	W0717 18:39:45.584260   80401 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:39:45.586842   80401 out.go:177] * Restarting existing kvm2 VM for "no-preload-066175" ...
	I0717 18:39:45.563046   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:39:45.563105   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563521   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:39:45.563555   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563758   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:39:45.565594   80180 machine.go:97] duration metric: took 4m37.427146226s to provisionDockerMachine
	I0717 18:39:45.565643   80180 fix.go:56] duration metric: took 4m37.448013968s for fixHost
	I0717 18:39:45.565651   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 4m37.448033785s
	W0717 18:39:45.565675   80180 start.go:714] error starting host: provision: host is not running
	W0717 18:39:45.565775   80180 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 18:39:45.565784   80180 start.go:729] Will try again in 5 seconds ...
	I0717 18:39:45.587901   80401 main.go:141] libmachine: (no-preload-066175) Calling .Start
	I0717 18:39:45.588046   80401 main.go:141] libmachine: (no-preload-066175) Ensuring networks are active...
	I0717 18:39:45.588666   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network default is active
	I0717 18:39:45.589012   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network mk-no-preload-066175 is active
	I0717 18:39:45.589386   80401 main.go:141] libmachine: (no-preload-066175) Getting domain xml...
	I0717 18:39:45.589959   80401 main.go:141] libmachine: (no-preload-066175) Creating domain...
	I0717 18:39:46.785717   80401 main.go:141] libmachine: (no-preload-066175) Waiting to get IP...
	I0717 18:39:46.786495   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:46.786912   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:46.786974   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:46.786888   81612 retry.go:31] will retry after 301.458026ms: waiting for machine to come up
	I0717 18:39:47.090556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.091129   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.091154   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.091098   81612 retry.go:31] will retry after 347.107185ms: waiting for machine to come up
	I0717 18:39:47.439530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.440010   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.440033   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.439947   81612 retry.go:31] will retry after 436.981893ms: waiting for machine to come up
	I0717 18:39:47.878684   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.879091   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.879120   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.879051   81612 retry.go:31] will retry after 582.942833ms: waiting for machine to come up
	I0717 18:39:48.464068   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:48.464568   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:48.464593   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:48.464513   81612 retry.go:31] will retry after 633.101908ms: waiting for machine to come up
	I0717 18:39:49.099383   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.099762   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.099784   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.099720   81612 retry.go:31] will retry after 847.181679ms: waiting for machine to come up
	I0717 18:39:50.567294   80180 start.go:360] acquireMachinesLock for embed-certs-527415: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:39:49.948696   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.949228   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.949260   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.949188   81612 retry.go:31] will retry after 1.048891217s: waiting for machine to come up
	I0717 18:39:50.999658   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.000062   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.000099   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.000001   81612 retry.go:31] will retry after 942.285454ms: waiting for machine to come up
	I0717 18:39:51.944171   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.944676   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.944702   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.944632   81612 retry.go:31] will retry after 1.21768861s: waiting for machine to come up
	I0717 18:39:53.163883   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:53.164345   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:53.164368   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:53.164305   81612 retry.go:31] will retry after 1.505905193s: waiting for machine to come up
	I0717 18:39:54.671532   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:54.671951   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:54.671977   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:54.671918   81612 retry.go:31] will retry after 2.885547597s: waiting for machine to come up
	I0717 18:39:57.560375   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:57.560878   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:57.560902   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:57.560830   81612 retry.go:31] will retry after 3.53251124s: waiting for machine to come up
	I0717 18:40:02.249487   80857 start.go:364] duration metric: took 3m17.095542929s to acquireMachinesLock for "old-k8s-version-019549"
	I0717 18:40:02.249548   80857 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:02.249556   80857 fix.go:54] fixHost starting: 
	I0717 18:40:02.249946   80857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:02.249976   80857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:02.269365   80857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0717 18:40:02.269715   80857 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:02.270182   80857 main.go:141] libmachine: Using API Version  1
	I0717 18:40:02.270205   80857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:02.270534   80857 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:02.270738   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:02.270875   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetState
	I0717 18:40:02.272408   80857 fix.go:112] recreateIfNeeded on old-k8s-version-019549: state=Stopped err=<nil>
	I0717 18:40:02.272443   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	W0717 18:40:02.272597   80857 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:02.274702   80857 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-019549" ...
	I0717 18:40:01.094975   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has current primary IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095579   80401 main.go:141] libmachine: (no-preload-066175) Found IP for machine: 192.168.72.216
	I0717 18:40:01.095592   80401 main.go:141] libmachine: (no-preload-066175) Reserving static IP address...
	I0717 18:40:01.095955   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.095980   80401 main.go:141] libmachine: (no-preload-066175) DBG | skip adding static IP to network mk-no-preload-066175 - found existing host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"}
	I0717 18:40:01.095989   80401 main.go:141] libmachine: (no-preload-066175) Reserved static IP address: 192.168.72.216
	I0717 18:40:01.096000   80401 main.go:141] libmachine: (no-preload-066175) Waiting for SSH to be available...
	I0717 18:40:01.096010   80401 main.go:141] libmachine: (no-preload-066175) DBG | Getting to WaitForSSH function...
	I0717 18:40:01.098163   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098498   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.098521   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098631   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH client type: external
	I0717 18:40:01.098657   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa (-rw-------)
	I0717 18:40:01.098692   80401 main.go:141] libmachine: (no-preload-066175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:01.098707   80401 main.go:141] libmachine: (no-preload-066175) DBG | About to run SSH command:
	I0717 18:40:01.098720   80401 main.go:141] libmachine: (no-preload-066175) DBG | exit 0
	I0717 18:40:01.216740   80401 main.go:141] libmachine: (no-preload-066175) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:01.217099   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetConfigRaw
	I0717 18:40:01.217706   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.220160   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220461   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.220492   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220656   80401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/config.json ...
	I0717 18:40:01.220843   80401 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:01.220860   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:01.221067   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.223044   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223347   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.223371   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223531   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.223719   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223864   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223980   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.224125   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.224332   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.224345   80401 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:01.321053   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:01.321083   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321333   80401 buildroot.go:166] provisioning hostname "no-preload-066175"
	I0717 18:40:01.321359   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321529   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.323945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324269   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.324297   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324421   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.324582   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324724   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324837   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.324996   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.325162   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.325175   80401 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-066175 && echo "no-preload-066175" | sudo tee /etc/hostname
	I0717 18:40:01.435003   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-066175
	
	I0717 18:40:01.435033   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.437795   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438113   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.438155   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438344   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.438533   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438692   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.438948   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.439094   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.439108   80401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-066175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-066175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-066175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:01.540598   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:01.540631   80401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:01.540650   80401 buildroot.go:174] setting up certificates
	I0717 18:40:01.540660   80401 provision.go:84] configureAuth start
	I0717 18:40:01.540669   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.540977   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.543503   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543788   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.543817   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543907   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.545954   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546261   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.546280   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546415   80401 provision.go:143] copyHostCerts
	I0717 18:40:01.546483   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:01.546498   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:01.546596   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:01.546730   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:01.546743   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:01.546788   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:01.546878   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:01.546888   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:01.546921   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:01.547054   80401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.no-preload-066175 san=[127.0.0.1 192.168.72.216 localhost minikube no-preload-066175]
	I0717 18:40:01.628522   80401 provision.go:177] copyRemoteCerts
	I0717 18:40:01.628574   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:01.628596   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.631306   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631714   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.631761   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631876   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.632050   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.632210   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.632330   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:01.711344   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:01.738565   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 18:40:01.765888   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:40:01.790852   80401 provision.go:87] duration metric: took 250.181586ms to configureAuth
	I0717 18:40:01.790874   80401 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:01.791046   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:40:01.791111   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.793530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.793922   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.793945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.794095   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.794323   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794497   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794635   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.794786   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.794955   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.794969   80401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:02.032506   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:02.032543   80401 machine.go:97] duration metric: took 811.687511ms to provisionDockerMachine
	I0717 18:40:02.032554   80401 start.go:293] postStartSetup for "no-preload-066175" (driver="kvm2")
	I0717 18:40:02.032567   80401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:02.032596   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.032921   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:02.032966   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.035429   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035731   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.035767   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035921   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.036081   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.036351   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.036493   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.114601   80401 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:02.118230   80401 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:02.118247   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:02.118308   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:02.118384   80401 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:02.118592   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:02.126753   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:02.148028   80401 start.go:296] duration metric: took 115.461293ms for postStartSetup
	I0717 18:40:02.148066   80401 fix.go:56] duration metric: took 16.582258787s for fixHost
	I0717 18:40:02.148084   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.150550   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.150917   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.150949   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.151061   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.151242   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151394   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151513   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.151658   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:02.151828   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:02.151841   80401 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:02.249303   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241602.223072082
	
	I0717 18:40:02.249334   80401 fix.go:216] guest clock: 1721241602.223072082
	I0717 18:40:02.249344   80401 fix.go:229] Guest: 2024-07-17 18:40:02.223072082 +0000 UTC Remote: 2024-07-17 18:40:02.14806999 +0000 UTC m=+268.060359078 (delta=75.002092ms)
	I0717 18:40:02.249388   80401 fix.go:200] guest clock delta is within tolerance: 75.002092ms
	I0717 18:40:02.249396   80401 start.go:83] releasing machines lock for "no-preload-066175", held for 16.683615057s
	I0717 18:40:02.249442   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.249735   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:02.252545   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.252896   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.252929   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.253053   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253516   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253770   80401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:02.253803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.253913   80401 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:02.253937   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.256152   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256462   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.256501   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256558   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.256616   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256718   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.256879   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257013   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.257021   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.257038   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.257158   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.257312   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.257469   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257604   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.376103   80401 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:02.381639   80401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:02.529357   80401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:02.536396   80401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:02.536463   80401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:02.555045   80401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:02.555067   80401 start.go:495] detecting cgroup driver to use...
	I0717 18:40:02.555130   80401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:02.570540   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:02.583804   80401 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:02.583867   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:02.596657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:02.610371   80401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:02.717489   80401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:02.875146   80401 docker.go:233] disabling docker service ...
	I0717 18:40:02.875235   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:02.895657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:02.908366   80401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:03.018375   80401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:03.143922   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:03.160599   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:03.180643   80401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 18:40:03.180709   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.190040   80401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:03.190097   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.199275   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.208647   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.217750   80401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:03.226808   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.235779   80401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.251451   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.261476   80401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:03.269978   80401 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:03.270028   80401 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:03.280901   80401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:03.290184   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:03.409167   80401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:03.541153   80401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:03.541218   80401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:03.546012   80401 start.go:563] Will wait 60s for crictl version
	I0717 18:40:03.546059   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:03.549567   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:03.588396   80401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:03.588467   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.622472   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.652180   80401 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 18:40:03.653613   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:03.656560   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.656959   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:03.656987   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.657222   80401 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:03.661102   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:03.673078   80401 kubeadm.go:883] updating cluster {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:03.673212   80401 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:40:03.673248   80401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:03.703959   80401 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 18:40:03.703986   80401 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:03.704042   80401 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.704078   80401 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.704095   80401 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.704114   80401 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.704150   80401 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.704077   80401 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.704168   80401 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 18:40:03.704243   80401 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.705795   80401 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705801   80401 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.705792   80401 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.705816   80401 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.705829   80401 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 18:40:03.706094   80401 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.925413   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.930827   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 18:40:03.963901   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.964215   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.966162   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.970852   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.973664   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.997849   80401 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 18:40:03.997912   80401 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.997969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118851   80401 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 18:40:04.118888   80401 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.118892   80401 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 18:40:04.118924   80401 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.118934   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118943   80401 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 18:40:04.118969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118969   80401 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.119001   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119027   80401 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 18:40:04.119058   80401 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.119089   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:04.119104   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119065   80401 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 18:40:04.119136   80401 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.119159   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:02.275985   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .Start
	I0717 18:40:02.276143   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring networks are active...
	I0717 18:40:02.276898   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network default is active
	I0717 18:40:02.277333   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network mk-old-k8s-version-019549 is active
	I0717 18:40:02.277796   80857 main.go:141] libmachine: (old-k8s-version-019549) Getting domain xml...
	I0717 18:40:02.278481   80857 main.go:141] libmachine: (old-k8s-version-019549) Creating domain...
	I0717 18:40:03.571325   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting to get IP...
	I0717 18:40:03.572359   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.572836   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.572968   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.572816   81751 retry.go:31] will retry after 301.991284ms: waiting for machine to come up
	I0717 18:40:03.876263   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.876688   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.876715   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.876637   81751 retry.go:31] will retry after 286.461163ms: waiting for machine to come up
	I0717 18:40:04.165366   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.165873   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.165902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.165811   81751 retry.go:31] will retry after 383.479108ms: waiting for machine to come up
	I0717 18:40:04.551152   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.551615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.551650   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.551589   81751 retry.go:31] will retry after 429.076714ms: waiting for machine to come up
	I0717 18:40:04.982157   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.982517   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.982545   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.982470   81751 retry.go:31] will retry after 553.684035ms: waiting for machine to come up
	I0717 18:40:04.122952   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.130590   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.130741   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.200609   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.200631   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.200643   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 18:40:04.200728   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:04.200741   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.200815   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.212034   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 18:40:04.212057   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.212113   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:04.212123   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.259447   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259525   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259548   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259552   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259553   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 18:40:04.259534   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.259588   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259591   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 18:40:04.259628   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259639   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.550060   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236639   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.976976668s)
	I0717 18:40:06.236683   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236691   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.97711629s)
	I0717 18:40:06.236718   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236732   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.977125153s)
	I0717 18:40:06.236752   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 18:40:06.236776   80401 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236854   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236781   80401 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.68669473s)
	I0717 18:40:06.236908   80401 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 18:40:06.236951   80401 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236994   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:08.107122   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870244887s)
	I0717 18:40:08.107152   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 18:40:08.107175   80401 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107203   80401 ssh_runner.go:235] Completed: which crictl: (1.870188554s)
	I0717 18:40:08.107224   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107261   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:08.146817   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 18:40:08.146932   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:05.538229   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:05.538753   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:05.538777   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:05.538702   81751 retry.go:31] will retry after 747.130907ms: waiting for machine to come up
	I0717 18:40:06.287146   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:06.287626   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:06.287665   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:06.287581   81751 retry.go:31] will retry after 1.171580264s: waiting for machine to come up
	I0717 18:40:07.461393   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:07.462015   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:07.462046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:07.461963   81751 retry.go:31] will retry after 1.199265198s: waiting for machine to come up
	I0717 18:40:08.663340   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:08.663789   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:08.663815   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:08.663745   81751 retry.go:31] will retry after 1.621895351s: waiting for machine to come up
	I0717 18:40:11.404193   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.296944718s)
	I0717 18:40:11.404228   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 18:40:11.404248   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:11.404245   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.257289666s)
	I0717 18:40:11.404272   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 18:40:11.404294   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:13.370389   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966067238s)
	I0717 18:40:13.370426   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 18:40:13.370455   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:13.370505   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:10.287596   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:10.288019   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:10.288046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:10.287964   81751 retry.go:31] will retry after 1.748504204s: waiting for machine to come up
	I0717 18:40:12.038137   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:12.038582   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:12.038615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:12.038532   81751 retry.go:31] will retry after 2.477996004s: waiting for machine to come up
	I0717 18:40:14.517788   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:14.518175   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:14.518203   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:14.518123   81751 retry.go:31] will retry after 3.29313184s: waiting for machine to come up
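The retry.go:31 debug lines show libmachine polling libvirt for the new domain's DHCP lease with a delay that grows on each attempt (302ms, 286ms, 383ms, ... 3.29s). A rough sketch of that shape of retry loop, assuming a simple multiplicative backoff with jitter rather than minikube's actual retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff keeps calling fn until it succeeds or attempts run out,
    // sleeping a randomized, growing delay between tries — the same shape as
    // the "will retry after ...: waiting for machine to come up" lines above.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	delay := base
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		time.Sleep(delay + jitter)
    		delay = delay * 3 / 2 // grow roughly 1.5x per attempt
    	}
    	return errors.New("machine did not come up in time")
    }

    func main() {
    	tries := 0
    	err := retryWithBackoff(10, 300*time.Millisecond, func() error {
    		tries++
    		if tries < 4 {
    			return errors.New("no IP yet")
    		}
    		return nil
    	})
    	fmt.Println("done after", tries, "tries, err =", err)
    }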
	I0717 18:40:19.093608   81068 start.go:364] duration metric: took 3m4.523289209s to acquireMachinesLock for "default-k8s-diff-port-022930"
	I0717 18:40:19.093694   81068 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:19.093705   81068 fix.go:54] fixHost starting: 
	I0717 18:40:19.094122   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:19.094157   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:19.113793   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0717 18:40:19.114236   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:19.114755   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:40:19.114775   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:19.115110   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:19.115294   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:19.115434   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:40:19.117072   81068 fix.go:112] recreateIfNeeded on default-k8s-diff-port-022930: state=Stopped err=<nil>
	I0717 18:40:19.117109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	W0717 18:40:19.117256   81068 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:19.120986   81068 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-022930" ...
	I0717 18:40:15.214734   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.844202729s)
	I0717 18:40:15.214756   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 18:40:15.214777   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:15.214814   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:17.066570   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.851726063s)
	I0717 18:40:17.066604   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 18:40:17.066629   80401 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.066679   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.703556   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 18:40:17.703614   80401 cache_images.go:123] Successfully loaded all cached images
	I0717 18:40:17.703624   80401 cache_images.go:92] duration metric: took 13.999623105s to LoadCachedImages
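Each image above follows the same path: stat the tarball under /var/lib/minikube/images to decide whether the copy can be skipped ("copy: skipping ... (exists)"), then `sudo podman load -i <tarball>` streams it into the CRI-O image store, producing the "Transferred and loaded ... from cache" lines. A condensed sketch of that per-image step, assuming local execution instead of minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // loadCachedImage copies nothing if the tarball is already on the node
    // and then loads it with podman, as in the log above.
    func loadCachedImage(tarball string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("tarball missing, would need to copy it first: %w", err)
    	}
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := loadCachedImage("/var/lib/minikube/images/etcd_3.5.14-0"); err != nil {
    		fmt.Println(err)
    	}
    }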
	I0717 18:40:17.703638   80401 kubeadm.go:934] updating node { 192.168.72.216 8443 v1.31.0-beta.0 crio true true} ...
	I0717 18:40:17.703754   80401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-066175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:17.703830   80401 ssh_runner.go:195] Run: crio config
	I0717 18:40:17.753110   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:17.753138   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:17.753159   80401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:17.753190   80401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.216 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-066175 NodeName:no-preload-066175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:17.753404   80401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-066175"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:17.753492   80401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 18:40:17.763417   80401 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:17.763491   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:17.772139   80401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 18:40:17.786982   80401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 18:40:17.801327   80401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 18:40:17.816796   80401 ssh_runner.go:195] Run: grep 192.168.72.216	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:17.820354   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
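The bash one-liner above rewrites /etc/hosts idempotently: strip any existing control-plane.minikube.internal entry, append the current node IP, and copy the temp file back with sudo. The same logic in Go, as an illustration only (it writes to a .new file rather than straight over /etc/hosts):

    package main

    import (
    	"os"
    	"strings"
    )

    // upsertHostsEntry drops any line already ending in "\t<host>" and appends
    // "<ip>\t<host>", mirroring the grep -v + echo pipeline in the log.
    func upsertHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // remove the stale entry
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path+".new", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := upsertHostsEntry("/etc/hosts", "192.168.72.216", "control-plane.minikube.internal"); err != nil {
    		panic(err)
    	}
    }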
	I0717 18:40:17.834155   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:17.970222   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:17.989953   80401 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175 for IP: 192.168.72.216
	I0717 18:40:17.989977   80401 certs.go:194] generating shared ca certs ...
	I0717 18:40:17.989998   80401 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:17.990160   80401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:17.990217   80401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:17.990231   80401 certs.go:256] generating profile certs ...
	I0717 18:40:17.990365   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key
	I0717 18:40:17.990460   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672
	I0717 18:40:17.990509   80401 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key
	I0717 18:40:17.990679   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:17.990723   80401 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:17.990740   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:17.990772   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:17.990813   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:17.990846   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:17.990905   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:17.991590   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:18.035349   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:18.079539   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:18.110382   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:18.135920   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:40:18.168675   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:18.196132   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:18.230418   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:18.254319   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:18.277293   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:18.301416   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:18.330021   80401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:18.348803   80401 ssh_runner.go:195] Run: openssl version
	I0717 18:40:18.355126   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:18.366004   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370221   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370287   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.375799   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:18.385991   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:18.396141   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400451   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400526   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.406203   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:18.419059   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:18.429450   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433742   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433794   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.439261   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
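Each certificate copied into /usr/share/ca-certificates is made trusted by linking it under /etc/ssl/certs as "<subject-hash>.0", where the hash comes from `openssl x509 -hash -noout`; that is why 21577.pem becomes 51391683.0 and minikubeCA.pem becomes b5213941.0 above. A small sketch of deriving the link name with the same openssl invocation seen in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHashLink returns the /etc/ssl/certs symlink name OpenSSL-based
    // tooling expects for a CA certificate: "<subject hash>.0".
    func subjectHashLink(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
    	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("would link to /etc/ssl/certs/" + link)
    }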
	I0717 18:40:18.450327   80401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:18.454734   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:18.460256   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:18.465766   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:18.471349   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:18.476780   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:18.482509   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
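`-checkend 86400` asks openssl whether the certificate will still be valid 24 hours from now; minikube runs it over every control-plane cert before deciding the existing ones can be reused. The equivalent check with Go's crypto/x509, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file will
    // be expired d from now — the same question as `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }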
	I0717 18:40:18.488138   80401 kubeadm.go:392] StartCluster: {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:18.488229   80401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:18.488270   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.532219   80401 cri.go:89] found id: ""
	I0717 18:40:18.532318   80401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:18.542632   80401 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:18.542655   80401 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:18.542699   80401 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:18.552352   80401 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:18.553351   80401 kubeconfig.go:125] found "no-preload-066175" server: "https://192.168.72.216:8443"
	I0717 18:40:18.555295   80401 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:18.565857   80401 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.216
	I0717 18:40:18.565892   80401 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:18.565905   80401 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:18.565958   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.605512   80401 cri.go:89] found id: ""
	I0717 18:40:18.605593   80401 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:18.622235   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:18.633175   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:18.633196   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:18.633241   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:18.641969   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:18.642023   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:18.651017   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:18.659619   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:18.659667   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:18.668008   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.675985   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:18.676037   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.685937   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:18.695574   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:18.695624   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:18.706040   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:18.717397   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:18.836009   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
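The block above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected https://control-plane.minikube.internal:8443 server and removed when the check fails (here the files simply do not exist yet), after which the `kubeadm init phase certs` and `kubeadm init phase kubeconfig` commands regenerate them. A sketch of that loop, again assuming local execution rather than ssh_runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	server := "https://control-plane.minikube.internal:8443"
    	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, c := range confs {
    		path := "/etc/kubernetes/" + c
    		data, err := os.ReadFile(path)
    		// Missing file or wrong server URL: drop it so kubeadm can rewrite it.
    		if err != nil || !strings.Contains(string(data), server) {
    			_ = os.Remove(path)
    			fmt.Println("removed stale", path)
    		}
    	}
    	// Regenerate certs and kubeconfigs from the staged kubeadm.yaml, as in the log.
    	for _, phase := range []string{"certs", "kubeconfig"} {
    		cmd := exec.Command("kubeadm", "init", "phase", phase, "all",
    			"--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Println(phase, "phase failed:", err)
    		}
    	}
    }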
	I0717 18:40:19.122366   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Start
	I0717 18:40:19.122530   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring networks are active...
	I0717 18:40:19.123330   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network default is active
	I0717 18:40:19.123832   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network mk-default-k8s-diff-port-022930 is active
	I0717 18:40:19.124268   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Getting domain xml...
	I0717 18:40:19.124922   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Creating domain...
	I0717 18:40:17.813673   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814213   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has current primary IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814242   80857 main.go:141] libmachine: (old-k8s-version-019549) Found IP for machine: 192.168.39.128
	I0717 18:40:17.814277   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserving static IP address...
	I0717 18:40:17.814720   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserved static IP address: 192.168.39.128
	I0717 18:40:17.814738   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting for SSH to be available...
	I0717 18:40:17.814762   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.814783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | skip adding static IP to network mk-old-k8s-version-019549 - found existing host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"}
	I0717 18:40:17.814796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Getting to WaitForSSH function...
	I0717 18:40:17.817314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817714   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.817743   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH client type: external
	I0717 18:40:17.817944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa (-rw-------)
	I0717 18:40:17.817971   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:17.817984   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | About to run SSH command:
	I0717 18:40:17.818000   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | exit 0
	I0717 18:40:17.945902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:17.946262   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetConfigRaw
	I0717 18:40:17.946907   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:17.949757   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950158   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.950178   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950474   80857 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/config.json ...
	I0717 18:40:17.950706   80857 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:17.950728   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:17.950941   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:17.953738   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954141   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.954184   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954282   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:17.954456   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954617   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954790   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:17.954957   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:17.955121   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:17.955131   80857 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:18.061082   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:18.061113   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061405   80857 buildroot.go:166] provisioning hostname "old-k8s-version-019549"
	I0717 18:40:18.061432   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061685   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.064855   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.065348   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065537   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.065777   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.065929   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.066118   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.066329   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.066547   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.066564   80857 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-019549 && echo "old-k8s-version-019549" | sudo tee /etc/hostname
	I0717 18:40:18.191467   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-019549
	
	I0717 18:40:18.191517   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.194917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195455   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.195502   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195714   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.195908   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196105   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196288   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.196483   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.196708   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.196731   80857 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-019549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-019549/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-019549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:18.315020   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:18.315047   80857 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:18.315065   80857 buildroot.go:174] setting up certificates
	I0717 18:40:18.315078   80857 provision.go:84] configureAuth start
	I0717 18:40:18.315090   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.315358   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:18.318342   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.318796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.318826   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.319078   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.321562   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.321914   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.321944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.322125   80857 provision.go:143] copyHostCerts
	I0717 18:40:18.322208   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:18.322226   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:18.322309   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:18.322443   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:18.322457   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:18.322492   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:18.322579   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:18.322591   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:18.322621   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:18.322727   80857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-019549 san=[127.0.0.1 192.168.39.128 localhost minikube old-k8s-version-019549]
	I0717 18:40:18.397216   80857 provision.go:177] copyRemoteCerts
	I0717 18:40:18.397266   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:18.397301   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.399887   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400237   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.400286   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400531   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.400732   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.400880   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.401017   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.490677   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:18.518392   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 18:40:18.543930   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:18.567339   80857 provision.go:87] duration metric: took 252.250106ms to configureAuth
	I0717 18:40:18.567360   80857 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:18.567539   80857 config.go:182] Loaded profile config "old-k8s-version-019549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:40:18.567610   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.570373   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.570809   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570943   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.571140   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571281   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.571624   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.571841   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.571862   80857 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:18.845725   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:18.845752   80857 machine.go:97] duration metric: took 895.03234ms to provisionDockerMachine
	I0717 18:40:18.845765   80857 start.go:293] postStartSetup for "old-k8s-version-019549" (driver="kvm2")
	I0717 18:40:18.845778   80857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:18.845828   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:18.846158   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:18.846192   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.848760   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849264   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.849293   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.849649   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.849843   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.850007   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.938026   80857 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:18.943223   80857 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:18.943254   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:18.943317   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:18.943417   80857 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:18.943509   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:18.954887   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:18.976980   80857 start.go:296] duration metric: took 131.200877ms for postStartSetup
	I0717 18:40:18.977022   80857 fix.go:56] duration metric: took 16.727466541s for fixHost
	I0717 18:40:18.977041   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.980020   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980384   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.980417   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980533   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.980723   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.980903   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.981059   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.981207   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.981406   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.981418   80857 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:19.093409   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241619.063415252
	
	I0717 18:40:19.093433   80857 fix.go:216] guest clock: 1721241619.063415252
	I0717 18:40:19.093443   80857 fix.go:229] Guest: 2024-07-17 18:40:19.063415252 +0000 UTC Remote: 2024-07-17 18:40:18.97702579 +0000 UTC m=+213.960604949 (delta=86.389462ms)
	I0717 18:40:19.093494   80857 fix.go:200] guest clock delta is within tolerance: 86.389462ms
	I0717 18:40:19.093506   80857 start.go:83] releasing machines lock for "old-k8s-version-019549", held for 16.843984035s
	I0717 18:40:19.093543   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.093842   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:19.096443   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.096817   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.096848   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.097035   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097579   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097769   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097859   80857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:19.097915   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.098007   80857 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:19.098031   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.100775   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101108   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101160   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101185   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101412   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101595   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.101606   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101637   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101718   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.101789   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101853   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.101975   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.102092   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.102212   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.218596   80857 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:19.225675   80857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:19.371453   80857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:19.381365   80857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:19.381438   80857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:19.397504   80857 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:19.397530   80857 start.go:495] detecting cgroup driver to use...
	I0717 18:40:19.397597   80857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:19.412150   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:19.425495   80857 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:19.425578   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:19.438662   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:19.451953   80857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:19.578702   80857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:19.733328   80857 docker.go:233] disabling docker service ...
	I0717 18:40:19.733411   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:19.753615   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:19.774057   80857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:19.933901   80857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:20.049914   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:20.063500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:20.082560   80857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 18:40:20.082611   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.092857   80857 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:20.092912   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.103283   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.112612   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.122671   80857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:20.132892   80857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:20.145445   80857 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:20.145501   80857 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:20.158958   80857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:20.168377   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:20.307224   80857 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:20.453407   80857 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:20.453490   80857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:20.458007   80857 start.go:563] Will wait 60s for crictl version
	I0717 18:40:20.458062   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:20.461420   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:20.507358   80857 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:20.507426   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.542812   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.577280   80857 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 18:40:20.432028   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.59597321s)
	I0717 18:40:20.432063   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.633854   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.728474   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.879989   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:20.880079   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.380421   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.880208   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.912390   80401 api_server.go:72] duration metric: took 1.032400417s to wait for apiserver process to appear ...
	I0717 18:40:21.912419   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:21.912443   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:21.912904   80401 api_server.go:269] stopped: https://192.168.72.216:8443/healthz: Get "https://192.168.72.216:8443/healthz": dial tcp 192.168.72.216:8443: connect: connection refused
	I0717 18:40:22.412598   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:20.397025   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting to get IP...
	I0717 18:40:20.398122   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398525   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398610   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.398506   81910 retry.go:31] will retry after 285.646022ms: waiting for machine to come up
	I0717 18:40:20.686556   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687151   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687263   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.687202   81910 retry.go:31] will retry after 239.996ms: waiting for machine to come up
	I0717 18:40:20.928604   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929111   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929139   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.929057   81910 retry.go:31] will retry after 487.674422ms: waiting for machine to come up
	I0717 18:40:21.418475   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418928   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.418872   81910 retry.go:31] will retry after 439.363216ms: waiting for machine to come up
	I0717 18:40:21.859546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860273   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.860145   81910 retry.go:31] will retry after 598.922134ms: waiting for machine to come up
	I0717 18:40:22.461026   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461509   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461542   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:22.461457   81910 retry.go:31] will retry after 908.602286ms: waiting for machine to come up
	I0717 18:40:23.371582   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372170   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:23.372093   81910 retry.go:31] will retry after 893.690966ms: waiting for machine to come up
	I0717 18:40:24.267377   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267908   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267935   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:24.267873   81910 retry.go:31] will retry after 1.468061022s: waiting for machine to come up
	I0717 18:40:20.578679   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:20.581569   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.581933   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:20.581961   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.582197   80857 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:20.586047   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:20.598137   80857 kubeadm.go:883] updating cluster {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:20.598284   80857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:40:20.598355   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:20.646681   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:20.646757   80857 ssh_runner.go:195] Run: which lz4
	I0717 18:40:20.650691   80857 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:20.654703   80857 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:20.654730   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 18:40:22.163706   80857 crio.go:462] duration metric: took 1.513040695s to copy over tarball
	I0717 18:40:22.163783   80857 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:40:24.904256   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.904292   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.904308   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:24.971088   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.971120   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.971136   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.015832   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.015868   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.413309   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.418927   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.418955   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.913026   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.917375   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.917407   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.412566   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.419115   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.419140   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.912680   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.920245   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.920268   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.412854   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.417356   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.417390   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.912883   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.918242   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.918274   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:28.412591   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:28.419257   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:40:28.427814   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:40:28.427842   80401 api_server.go:131] duration metric: took 6.515416451s to wait for apiserver health ...
	I0717 18:40:28.427854   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:28.427863   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:28.429828   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:40:28.431012   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:28.444822   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:40:28.465212   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:28.477639   80401 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:28.477691   80401 system_pods.go:61] "coredns-5cfdc65f69-spj2w" [6849b651-9346-4d96-97a7-88eca7bbd50a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:28.477706   80401 system_pods.go:61] "etcd-no-preload-066175" [be012488-220b-421d-bf16-a3623fafb8fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:28.477721   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [4292a786-61f3-405d-8784-ec8a58e1b124] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:28.477731   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [937a48f4-7fca-4cee-bb50-51f1720960da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:28.477739   80401 system_pods.go:61] "kube-proxy-tn5xn" [f0a910b3-98b6-470f-a5a2-e49369ecb733] Running
	I0717 18:40:28.477748   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [ffa2475c-7a5a-4988-89a2-4727e07356cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:28.477756   80401 system_pods.go:61] "metrics-server-78fcd8795b-mbtvd" [ccd7a565-52ef-49be-b659-31ae20af537a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:28.477761   80401 system_pods.go:61] "storage-provisioner" [19914ecc-2fcc-4cb8-bd78-fb6891dcf85d] Running
	I0717 18:40:28.477769   80401 system_pods.go:74] duration metric: took 12.536267ms to wait for pod list to return data ...
	I0717 18:40:28.477777   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:28.482322   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:28.482348   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:28.482368   80401 node_conditions.go:105] duration metric: took 4.585233ms to run NodePressure ...
	I0717 18:40:28.482387   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.768656   80401 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773308   80401 kubeadm.go:739] kubelet initialised
	I0717 18:40:28.773330   80401 kubeadm.go:740] duration metric: took 4.654448ms waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773338   80401 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:28.778778   80401 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
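For reference, the wait logged above (api_server.go:253/279) is a poll of the apiserver's /healthz endpoint until it stops returning 500 with per-check [+]/[-] detail and answers 200. A minimal, self-contained Go sketch of that polling pattern is shown below; the endpoint URL, timeout, and function names are illustrative assumptions, not minikube's actual code.

// healthz_poll.go: sketch of polling a Kubernetes apiserver /healthz
// endpoint until it reports healthy, loosely mirroring the wait above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed certificate during bootstrap,
		// so verification is skipped in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 500 with per-check detail, as in the log above; retry.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.216:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}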
	I0717 18:40:25.738071   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738580   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738611   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:25.738538   81910 retry.go:31] will retry after 1.505740804s: waiting for machine to come up
	I0717 18:40:27.246293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246651   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246674   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:27.246606   81910 retry.go:31] will retry after 1.574253799s: waiting for machine to come up
	I0717 18:40:28.822159   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822597   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:28.822517   81910 retry.go:31] will retry after 2.132842884s: waiting for machine to come up
	I0717 18:40:25.307875   80857 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.144060111s)
	I0717 18:40:25.307903   80857 crio.go:469] duration metric: took 3.144169984s to extract the tarball
	I0717 18:40:25.307914   80857 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:25.354436   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:25.404799   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:25.404827   80857 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:25.404884   80857 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.404936   80857 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 18:40:25.404908   80857 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.404952   80857 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.404998   80857 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.405010   80857 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.406661   80857 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.406667   80857 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 18:40:25.406690   80857 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.407119   80857 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.619950   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 18:40:25.635075   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.641561   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.647362   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.648054   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.649684   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.664183   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.709163   80857 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 18:40:25.709227   80857 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 18:40:25.709275   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.760931   80857 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 18:40:25.760994   80857 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.761042   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.779324   80857 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 18:40:25.779378   80857 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.779429   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799052   80857 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 18:40:25.799097   80857 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.799106   80857 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 18:40:25.799131   80857 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 18:40:25.799190   80857 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.799233   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799136   80857 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.799148   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799298   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.806973   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 18:40:25.807041   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.807066   80857 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 18:40:25.807095   80857 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.807126   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.807137   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.807237   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.811025   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.811114   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.935792   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 18:40:25.935853   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 18:40:25.935863   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 18:40:25.935934   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.935973   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 18:40:25.935996   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 18:40:25.940351   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 18:40:25.970107   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 18:40:26.231894   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:26.372230   80857 cache_images.go:92] duration metric: took 967.383323ms to LoadCachedImages
	W0717 18:40:26.372327   80857 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0717 18:40:26.372346   80857 kubeadm.go:934] updating node { 192.168.39.128 8443 v1.20.0 crio true true} ...
	I0717 18:40:26.372517   80857 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-019549 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:26.372613   80857 ssh_runner.go:195] Run: crio config
	I0717 18:40:26.416155   80857 cni.go:84] Creating CNI manager for ""
	I0717 18:40:26.416181   80857 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:26.416196   80857 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:26.416229   80857 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-019549 NodeName:old-k8s-version-019549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 18:40:26.416526   80857 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-019549"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:26.416595   80857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 18:40:26.426941   80857 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:26.427006   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:26.437810   80857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 18:40:26.460046   80857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:26.482521   80857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 18:40:26.502536   80857 ssh_runner.go:195] Run: grep 192.168.39.128	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:26.506513   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:26.520895   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:26.648931   80857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:26.665278   80857 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549 for IP: 192.168.39.128
	I0717 18:40:26.665300   80857 certs.go:194] generating shared ca certs ...
	I0717 18:40:26.665329   80857 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:26.665508   80857 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:26.665561   80857 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:26.665574   80857 certs.go:256] generating profile certs ...
	I0717 18:40:26.665693   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.key
	I0717 18:40:26.665780   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key.9c9b0a7e
	I0717 18:40:26.665836   80857 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key
	I0717 18:40:26.665998   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:26.666049   80857 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:26.666063   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:26.666095   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:26.666128   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:26.666167   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:26.666225   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:26.667047   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:26.713984   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:26.742617   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:26.770441   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:26.795098   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 18:40:26.825038   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:26.861300   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:26.901664   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:40:26.926357   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:26.948986   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:26.973248   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:26.994642   80857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:27.010158   80857 ssh_runner.go:195] Run: openssl version
	I0717 18:40:27.015861   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:27.026221   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030496   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030567   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.035862   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:27.046312   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:27.057117   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061775   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061824   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.067535   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:27.079022   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:27.090009   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094688   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094768   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.100404   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:27.110653   80857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:27.115117   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:27.120633   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:27.126070   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:27.131500   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:27.137035   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:27.142426   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
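The six openssl invocations above all use -checkend 86400, i.e. before reusing the existing control-plane certificates they verify that each one remains valid for at least another 24 hours. A minimal Go sketch of the same check follows; the helper name and certificate path are hypothetical and not minikube's implementation.

// certcheck.go: sketch of what "openssl x509 -noout -checkend 86400" verifies.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to -checkend: does the cert expire before now+window?
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}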
	I0717 18:40:27.147638   80857 kubeadm.go:392] StartCluster: {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:27.147756   80857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:27.147816   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.187433   80857 cri.go:89] found id: ""
	I0717 18:40:27.187498   80857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:27.197001   80857 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:27.197020   80857 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:27.197070   80857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:27.206758   80857 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:27.207822   80857 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:40:27.208505   80857 kubeconfig.go:62] /home/jenkins/minikube-integration/19283-14386/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-019549" cluster setting kubeconfig missing "old-k8s-version-019549" context setting]
	I0717 18:40:27.209497   80857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:27.212786   80857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:27.222612   80857 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.128
	I0717 18:40:27.222649   80857 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:27.222663   80857 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:27.222721   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.268127   80857 cri.go:89] found id: ""
	I0717 18:40:27.268205   80857 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:27.284334   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:27.293669   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:27.293691   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:27.293743   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:27.305348   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:27.305437   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:27.317749   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:27.328481   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:27.328547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:27.337574   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.346242   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:27.346299   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.354946   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:27.363296   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:27.363350   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:27.371925   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:27.384020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:27.571539   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.767574   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.19599736s)
	I0717 18:40:28.767612   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.011512   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.151980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.258796   80857 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:29.258886   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:29.759072   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.787614   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:33.285208   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:30.956634   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957140   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:30.957059   81910 retry.go:31] will retry after 3.31337478s: waiting for machine to come up
	I0717 18:40:34.272528   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273063   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273094   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:34.273032   81910 retry.go:31] will retry after 3.207729964s: waiting for machine to come up
	I0717 18:40:30.259921   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.758948   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.258967   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.759872   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.259187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.759299   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.259080   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.759583   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.259740   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.759068   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
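The repeated pgrep runs above are a fixed-interval poll (roughly every 500ms) for the kube-apiserver process to reappear after the control-plane restart. A minimal Go sketch of that wait loop is below; the pattern and timeout are assumptions for illustration, not values taken from minikube's code.

// apiserver_wait.go: sketch of polling for a process with pgrep until it appears.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 and prints a PID when a matching process exists.
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("found process %s", out)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err)
	}
}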
	I0717 18:40:38.697183   80180 start.go:364] duration metric: took 48.129837953s to acquireMachinesLock for "embed-certs-527415"
	I0717 18:40:38.697248   80180 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:38.697260   80180 fix.go:54] fixHost starting: 
	I0717 18:40:38.697680   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:38.697712   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:38.713575   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0717 18:40:38.713926   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:38.714396   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:40:38.714422   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:38.714762   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:38.714949   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:38.715109   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:40:38.716552   80180 fix.go:112] recreateIfNeeded on embed-certs-527415: state=Stopped err=<nil>
	I0717 18:40:38.716574   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	W0717 18:40:38.716775   80180 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:38.718610   80180 out.go:177] * Restarting existing kvm2 VM for "embed-certs-527415" ...
	I0717 18:40:35.285888   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:36.285651   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.285676   80401 pod_ready.go:81] duration metric: took 7.506876819s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.285686   80401 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292615   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.292638   80401 pod_ready.go:81] duration metric: took 6.944487ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292650   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:38.298338   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:37.484312   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484723   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has current primary IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484740   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Found IP for machine: 192.168.50.245
	I0717 18:40:37.484753   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserving static IP address...
	I0717 18:40:37.485137   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.485161   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserved static IP address: 192.168.50.245
	I0717 18:40:37.485174   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | skip adding static IP to network mk-default-k8s-diff-port-022930 - found existing host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"}
	I0717 18:40:37.485191   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Getting to WaitForSSH function...
	I0717 18:40:37.485207   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for SSH to be available...
	I0717 18:40:37.487397   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487767   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.487796   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487899   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH client type: external
	I0717 18:40:37.487927   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa (-rw-------)
	I0717 18:40:37.487961   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:37.487973   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | About to run SSH command:
	I0717 18:40:37.487992   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | exit 0
	I0717 18:40:37.608746   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:37.609085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetConfigRaw
	I0717 18:40:37.609739   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.612293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612668   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.612689   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612936   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:40:37.613176   81068 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:37.613194   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:37.613391   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.615483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615774   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.615804   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615881   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.616038   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616187   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616306   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.616470   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.616676   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.616691   81068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:37.720971   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:37.721004   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721307   81068 buildroot.go:166] provisioning hostname "default-k8s-diff-port-022930"
	I0717 18:40:37.721340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.724162   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724507   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.724535   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724712   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.724912   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725090   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725259   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.725430   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.725635   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.725651   81068 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-022930 && echo "default-k8s-diff-port-022930" | sudo tee /etc/hostname
	I0717 18:40:37.837366   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-022930
	
	I0717 18:40:37.837389   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.839920   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840291   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.840325   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.840654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840830   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840970   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.841130   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.841344   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.841363   81068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-022930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-022930/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-022930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:37.948311   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:37.948343   81068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:37.948394   81068 buildroot.go:174] setting up certificates
	I0717 18:40:37.948406   81068 provision.go:84] configureAuth start
	I0717 18:40:37.948416   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.948732   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.951214   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951548   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.951578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951693   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.953805   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954086   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.954105   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954250   81068 provision.go:143] copyHostCerts
	I0717 18:40:37.954318   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:37.954334   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:37.954401   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:37.954531   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:37.954542   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:37.954575   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:37.954657   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:37.954667   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:37.954694   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:37.954758   81068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-022930 san=[127.0.0.1 192.168.50.245 default-k8s-diff-port-022930 localhost minikube]
	I0717 18:40:38.054084   81068 provision.go:177] copyRemoteCerts
	I0717 18:40:38.054136   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:38.054160   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.056841   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057265   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.057300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.057683   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.057839   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.057982   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.138206   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:38.163105   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 18:40:38.188449   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:38.214829   81068 provision.go:87] duration metric: took 266.409028ms to configureAuth
	I0717 18:40:38.214853   81068 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:38.215005   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:38.215068   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.217684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218010   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.218037   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.218419   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218573   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218706   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.218874   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.219021   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.219039   81068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:38.471162   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:38.471191   81068 machine.go:97] duration metric: took 858.000457ms to provisionDockerMachine
	I0717 18:40:38.471206   81068 start.go:293] postStartSetup for "default-k8s-diff-port-022930" (driver="kvm2")
	I0717 18:40:38.471220   81068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:38.471247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.471558   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:38.471590   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.474241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474673   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.474704   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474868   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.475085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.475245   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.475524   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.554800   81068 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:38.558601   81068 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:38.558624   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:38.558685   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:38.558769   81068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:38.558875   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:38.567664   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:38.589713   81068 start.go:296] duration metric: took 118.491854ms for postStartSetup
	I0717 18:40:38.589754   81068 fix.go:56] duration metric: took 19.496049651s for fixHost
	I0717 18:40:38.589777   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.592433   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592813   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.592860   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592989   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.593188   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593368   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593536   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.593738   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.593937   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.593955   81068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:38.697050   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241638.669121206
	
	I0717 18:40:38.697075   81068 fix.go:216] guest clock: 1721241638.669121206
	I0717 18:40:38.697085   81068 fix.go:229] Guest: 2024-07-17 18:40:38.669121206 +0000 UTC Remote: 2024-07-17 18:40:38.589759024 +0000 UTC m=+204.149894792 (delta=79.362182ms)
	I0717 18:40:38.697108   81068 fix.go:200] guest clock delta is within tolerance: 79.362182ms
	I0717 18:40:38.697118   81068 start.go:83] releasing machines lock for "default-k8s-diff-port-022930", held for 19.603450588s
	I0717 18:40:38.697143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.697381   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:38.700059   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700504   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.700529   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700764   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701541   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701619   81068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:38.701672   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.701777   81068 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:38.701797   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.704169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704478   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.704503   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704657   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.704849   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705002   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705164   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.705262   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.705300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.705496   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.705663   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705817   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705967   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.825607   81068 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:38.831484   81068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:38.972775   81068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:38.978446   81068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:38.978502   81068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:38.999160   81068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:38.999180   81068 start.go:495] detecting cgroup driver to use...
	I0717 18:40:38.999234   81068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:39.016133   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:39.029031   81068 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:39.029083   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:39.042835   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:39.056981   81068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:39.168521   81068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:39.306630   81068 docker.go:233] disabling docker service ...
	I0717 18:40:39.306704   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:39.320435   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:39.337780   81068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:35.259643   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:35.759432   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.259818   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.759627   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.259968   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.758933   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.259980   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.759776   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.259988   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.496847   81068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:39.627783   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:39.641684   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:39.659183   81068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:39.659250   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.669034   81068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:39.669100   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.678708   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.688822   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.699484   81068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:39.709505   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.720715   81068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.736510   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.746991   81068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:39.757265   81068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:39.757320   81068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:39.774777   81068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:39.789593   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:39.907377   81068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:40.039498   81068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:40.039592   81068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:40.044502   81068 start.go:563] Will wait 60s for crictl version
	I0717 18:40:40.044558   81068 ssh_runner.go:195] Run: which crictl
	I0717 18:40:40.048708   81068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:40.087738   81068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:40.087822   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.115460   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.150181   81068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:40:38.719828   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Start
	I0717 18:40:38.720004   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring networks are active...
	I0717 18:40:38.720983   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network default is active
	I0717 18:40:38.721537   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network mk-embed-certs-527415 is active
	I0717 18:40:38.721945   80180 main.go:141] libmachine: (embed-certs-527415) Getting domain xml...
	I0717 18:40:38.722654   80180 main.go:141] libmachine: (embed-certs-527415) Creating domain...
	I0717 18:40:40.007036   80180 main.go:141] libmachine: (embed-certs-527415) Waiting to get IP...
	I0717 18:40:40.007975   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.008511   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.008608   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.008495   82069 retry.go:31] will retry after 268.334211ms: waiting for machine to come up
	I0717 18:40:40.278129   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.278639   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.278670   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.278585   82069 retry.go:31] will retry after 350.00147ms: waiting for machine to come up
	I0717 18:40:40.630229   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.630819   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.630853   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.630768   82069 retry.go:31] will retry after 411.079615ms: waiting for machine to come up
	I0717 18:40:41.043232   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.043851   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.043880   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.043822   82069 retry.go:31] will retry after 387.726284ms: waiting for machine to come up
	I0717 18:40:41.433536   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.434058   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.434092   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.434005   82069 retry.go:31] will retry after 538.564385ms: waiting for machine to come up
	I0717 18:40:41.973917   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.974457   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.974489   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.974395   82069 retry.go:31] will retry after 778.576616ms: waiting for machine to come up
	I0717 18:40:42.754322   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:42.754872   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:42.754899   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:42.754837   82069 retry.go:31] will retry after 758.957234ms: waiting for machine to come up
	I0717 18:40:40.299673   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:40.801297   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.801325   80401 pod_ready.go:81] duration metric: took 4.508666316s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.801339   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807354   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.807372   80401 pod_ready.go:81] duration metric: took 6.024916ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807380   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812934   80401 pod_ready.go:92] pod "kube-proxy-tn5xn" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.812982   80401 pod_ready.go:81] duration metric: took 5.594378ms for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812996   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817940   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.817969   80401 pod_ready.go:81] duration metric: took 4.96427ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817982   80401 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:42.825018   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:40.151220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:40.153791   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:40.154246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154472   81068 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:40.159310   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:40.172121   81068 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:40.172256   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:40.172307   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:40.215863   81068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:40.215940   81068 ssh_runner.go:195] Run: which lz4
	I0717 18:40:40.220502   81068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:40.224682   81068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:40.224714   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:41.511505   81068 crio.go:462] duration metric: took 1.291039238s to copy over tarball
	I0717 18:40:41.511574   81068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:40:43.730839   81068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.219230444s)
	I0717 18:40:43.730901   81068 crio.go:469] duration metric: took 2.219370372s to extract the tarball
	I0717 18:40:43.730912   81068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:43.767876   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:43.809466   81068 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:40:43.809494   81068 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:40:43.809505   81068 kubeadm.go:934] updating node { 192.168.50.245 8444 v1.30.2 crio true true} ...
	I0717 18:40:43.809646   81068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-022930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:43.809740   81068 ssh_runner.go:195] Run: crio config
	I0717 18:40:43.850614   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:43.850635   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:43.850648   81068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:43.850669   81068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-022930 NodeName:default-k8s-diff-port-022930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:43.850795   81068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-022930"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:43.850851   81068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:40:43.862674   81068 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:43.862733   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:43.873304   81068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 18:40:43.888884   81068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:43.903631   81068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 18:40:43.918768   81068 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:43.922033   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:43.932546   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:44.049621   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:44.065718   81068 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930 for IP: 192.168.50.245
	I0717 18:40:44.065747   81068 certs.go:194] generating shared ca certs ...
	I0717 18:40:44.065767   81068 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:44.065939   81068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:44.065999   81068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:44.066016   81068 certs.go:256] generating profile certs ...
	I0717 18:40:44.066149   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/client.key
	I0717 18:40:44.066224   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key.8aa7f0a0
	I0717 18:40:44.066284   81068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key
	I0717 18:40:44.066445   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:44.066494   81068 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:44.066507   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:44.066548   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:44.066579   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:44.066606   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:44.066650   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:44.067421   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:44.104160   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:44.133716   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:44.161170   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:44.190489   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 18:40:44.211792   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:44.232875   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:44.255059   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:44.276826   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:44.298357   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:44.320634   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:44.345428   81068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:44.362934   81068 ssh_runner.go:195] Run: openssl version
	I0717 18:40:44.369764   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:44.382557   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386445   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386483   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.392033   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:44.401987   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:44.411437   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415367   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415419   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.420523   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:44.429915   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:44.439371   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443248   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443301   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.448380   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:44.457828   81068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:44.462151   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:44.467474   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:44.472829   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:40.259910   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:40.759917   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.259718   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.759839   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.259129   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.759772   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.259989   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.759724   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.258978   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.759594   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.515097   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:43.515595   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:43.515616   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:43.515539   82069 retry.go:31] will retry after 1.173590835s: waiting for machine to come up
	I0717 18:40:44.691027   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:44.691479   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:44.691520   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:44.691428   82069 retry.go:31] will retry after 1.594704966s: waiting for machine to come up
	I0717 18:40:46.288022   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:46.288609   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:46.288642   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:46.288549   82069 retry.go:31] will retry after 2.014912325s: waiting for machine to come up
	I0717 18:40:45.323815   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:47.324715   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:44.478397   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:44.483860   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:44.489029   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:40:44.494220   81068 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:44.494329   81068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:44.494381   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.534380   81068 cri.go:89] found id: ""
	I0717 18:40:44.534445   81068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:44.545270   81068 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:44.545287   81068 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:44.545328   81068 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:44.555521   81068 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:44.556584   81068 kubeconfig.go:125] found "default-k8s-diff-port-022930" server: "https://192.168.50.245:8444"
	I0717 18:40:44.558675   81068 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:44.567696   81068 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.245
	I0717 18:40:44.567727   81068 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:44.567739   81068 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:44.567787   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.605757   81068 cri.go:89] found id: ""
	I0717 18:40:44.605833   81068 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:44.622187   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:44.631169   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:44.631191   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:44.631241   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:40:44.639194   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:44.639248   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:44.647542   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:40:44.655622   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:44.655708   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:44.663923   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.671733   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:44.671778   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.680375   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:40:44.688043   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:44.688085   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:44.697020   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:44.705554   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:44.812051   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.351683   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.559471   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.618086   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.678836   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:45.678926   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.179998   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.679083   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.179084   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.679042   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.179150   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.195192   81068 api_server.go:72] duration metric: took 2.516354411s to wait for apiserver process to appear ...
	I0717 18:40:48.195222   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:48.195247   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:45.259185   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:45.759765   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.259009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.759131   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.259477   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.759386   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.259977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.759374   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.259744   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.759440   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.393650   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.393688   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.393705   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.467974   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.468000   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.696340   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.702264   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:50.702308   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.195503   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.200034   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:51.200060   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.695594   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.699593   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:40:51.706025   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:40:51.706048   81068 api_server.go:131] duration metric: took 3.510818337s to wait for apiserver health ...
	I0717 18:40:51.706059   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:51.706067   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:51.707696   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:40:48.305798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:48.306290   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:48.306323   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:48.306232   82069 retry.go:31] will retry after 1.789943402s: waiting for machine to come up
	I0717 18:40:50.098279   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:50.098771   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:50.098798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:50.098734   82069 retry.go:31] will retry after 2.765766483s: waiting for machine to come up
	I0717 18:40:52.867667   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:52.868191   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:52.868212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:52.868139   82069 retry.go:31] will retry after 2.762670644s: waiting for machine to come up
	I0717 18:40:49.325415   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.824015   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:53.824980   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.708887   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:51.718704   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:40:51.735711   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:51.745976   81068 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:51.746009   81068 system_pods.go:61] "coredns-7db6d8ff4d-czk4x" [80cedf0b-248a-458e-994c-81f852d78076] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:51.746022   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f9cf97bf-5fdc-4623-a78c-d29e0352ce40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:51.746036   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [599cef4d-2b4d-4cd5-9552-99de585759eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:51.746051   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [89092470-6fc9-47b2-b680-7c93945d9005] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:51.746062   81068 system_pods.go:61] "kube-proxy-hj7ss" [d260f18e-7a01-4f07-8c6a-87e8f6329f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 18:40:51.746074   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [fe098478-fcb6-4084-b773-11c2cbb995aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:51.746083   81068 system_pods.go:61] "metrics-server-569cc877fc-j9qhx" [18efb008-e7d3-435e-9156-57c16b454d07] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:51.746093   81068 system_pods.go:61] "storage-provisioner" [ac856758-62ca-485f-aa31-5cd1c7d1dbe5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:40:51.746103   81068 system_pods.go:74] duration metric: took 10.373616ms to wait for pod list to return data ...
	I0717 18:40:51.746115   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:51.749151   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:51.749173   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:51.749185   81068 node_conditions.go:105] duration metric: took 3.061813ms to run NodePressure ...
	I0717 18:40:51.749204   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:52.049486   81068 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053636   81068 kubeadm.go:739] kubelet initialised
	I0717 18:40:52.053656   81068 kubeadm.go:740] duration metric: took 4.136528ms waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053665   81068 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:52.058401   81068 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.062406   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062429   81068 pod_ready.go:81] duration metric: took 4.007504ms for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.062439   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062454   81068 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.066161   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066185   81068 pod_ready.go:81] duration metric: took 3.717781ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.066202   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066212   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.070043   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070064   81068 pod_ready.go:81] duration metric: took 3.840533ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.070074   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070080   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:54.077110   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:50.258977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.259867   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.759826   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.259016   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.759708   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.259589   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.759788   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.259753   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.759841   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.633531   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.633999   80180 main.go:141] libmachine: (embed-certs-527415) Found IP for machine: 192.168.61.90
	I0717 18:40:55.634014   80180 main.go:141] libmachine: (embed-certs-527415) Reserving static IP address...
	I0717 18:40:55.634026   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has current primary IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.634407   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.634438   80180 main.go:141] libmachine: (embed-certs-527415) Reserved static IP address: 192.168.61.90
	I0717 18:40:55.634456   80180 main.go:141] libmachine: (embed-certs-527415) DBG | skip adding static IP to network mk-embed-certs-527415 - found existing host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"}
	I0717 18:40:55.634476   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Getting to WaitForSSH function...
	I0717 18:40:55.634490   80180 main.go:141] libmachine: (embed-certs-527415) Waiting for SSH to be available...
	I0717 18:40:55.636604   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.636877   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.636904   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.637010   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH client type: external
	I0717 18:40:55.637032   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa (-rw-------)
	I0717 18:40:55.637063   80180 main.go:141] libmachine: (embed-certs-527415) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:55.637082   80180 main.go:141] libmachine: (embed-certs-527415) DBG | About to run SSH command:
	I0717 18:40:55.637094   80180 main.go:141] libmachine: (embed-certs-527415) DBG | exit 0
	I0717 18:40:55.765208   80180 main.go:141] libmachine: (embed-certs-527415) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:55.765554   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:40:55.766322   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:55.769331   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.769800   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.769827   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.770203   80180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json ...
	I0717 18:40:55.770593   80180 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:55.770620   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:55.770826   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.773837   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774313   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.774346   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774553   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.774750   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.774909   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.775060   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.775277   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.775534   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.775556   80180 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:55.888982   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:55.889013   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889259   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:40:55.889286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889501   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.891900   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892284   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.892302   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892532   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.892701   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892853   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892993   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.893136   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.893293   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.893310   80180 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-527415 && echo "embed-certs-527415" | sudo tee /etc/hostname
	I0717 18:40:56.018869   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-527415
	
	I0717 18:40:56.018898   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.021591   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.021888   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.021909   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.022286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.022489   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022646   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022765   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.022905   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.023050   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.023066   80180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-527415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-527415/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-527415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:56.146411   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:56.146455   80180 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:56.146478   80180 buildroot.go:174] setting up certificates
	I0717 18:40:56.146490   80180 provision.go:84] configureAuth start
	I0717 18:40:56.146502   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:56.146767   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.149369   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149725   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.149755   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149937   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.152431   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152753   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.152774   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152936   80180 provision.go:143] copyHostCerts
	I0717 18:40:56.153028   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:56.153041   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:56.153096   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:56.153186   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:56.153194   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:56.153214   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:56.153277   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:56.153283   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:56.153300   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:56.153349   80180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.embed-certs-527415 san=[127.0.0.1 192.168.61.90 embed-certs-527415 localhost minikube]
	I0717 18:40:56.326978   80180 provision.go:177] copyRemoteCerts
	I0717 18:40:56.327024   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:56.327045   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.329432   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329778   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.329809   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329927   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.330121   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.330295   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.330409   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.415173   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:56.438501   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 18:40:56.460520   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:56.481808   80180 provision.go:87] duration metric: took 335.305142ms to configureAuth
	I0717 18:40:56.481832   80180 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:56.482001   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:56.482063   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.484653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485044   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.485074   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485222   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.485468   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485652   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485810   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.485953   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.486108   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.486123   80180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:56.741135   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:56.741185   80180 machine.go:97] duration metric: took 970.573336ms to provisionDockerMachine
	I0717 18:40:56.741204   80180 start.go:293] postStartSetup for "embed-certs-527415" (driver="kvm2")
	I0717 18:40:56.741221   80180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:56.741245   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.741597   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:56.741625   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.744356   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.744805   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.744831   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.745025   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.745224   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.745382   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.745549   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.835435   80180 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:56.839724   80180 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:56.839753   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:56.839834   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:56.839945   80180 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:56.840083   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:56.849582   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:56.872278   80180 start.go:296] duration metric: took 131.057656ms for postStartSetup
	I0717 18:40:56.872347   80180 fix.go:56] duration metric: took 18.175085798s for fixHost
	I0717 18:40:56.872375   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.874969   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875308   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.875340   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875533   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.875722   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.875955   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.876089   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.876274   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.876459   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.876469   80180 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:56.985888   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241656.959508652
	
	I0717 18:40:56.985907   80180 fix.go:216] guest clock: 1721241656.959508652
	I0717 18:40:56.985914   80180 fix.go:229] Guest: 2024-07-17 18:40:56.959508652 +0000 UTC Remote: 2024-07-17 18:40:56.872354453 +0000 UTC m=+348.896679896 (delta=87.154199ms)
	I0717 18:40:56.985939   80180 fix.go:200] guest clock delta is within tolerance: 87.154199ms
	I0717 18:40:56.985944   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 18.288718042s
	I0717 18:40:56.985964   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.986210   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.988716   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989086   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.989114   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989279   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989786   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989966   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.990055   80180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:56.990092   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.990360   80180 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:56.990390   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.992519   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992816   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.992835   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992852   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992984   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993162   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.993234   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.993356   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993401   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993499   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.993541   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993754   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993915   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:57.116598   80180 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:57.122546   80180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:57.268379   80180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:57.274748   80180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:57.274819   80180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:57.290374   80180 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:57.290394   80180 start.go:495] detecting cgroup driver to use...
	I0717 18:40:57.290443   80180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:57.307521   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:57.323478   80180 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:57.323554   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:57.337078   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:57.350181   80180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:57.463512   80180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:57.626650   80180 docker.go:233] disabling docker service ...
	I0717 18:40:57.626714   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:57.641067   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:57.655085   80180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:57.802789   80180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:57.919140   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:57.932620   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:57.949471   80180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:57.949528   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.960297   80180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:57.960366   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.970890   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.980768   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.990723   80180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:58.000791   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.010332   80180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.026611   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.036106   80180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:58.044742   80180 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:58.044791   80180 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:58.056584   80180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:58.065470   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:58.182119   80180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:58.319330   80180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:58.319400   80180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:58.326361   80180 start.go:563] Will wait 60s for crictl version
	I0717 18:40:58.326405   80180 ssh_runner.go:195] Run: which crictl
	I0717 18:40:58.329951   80180 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:58.366561   80180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:58.366668   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.398483   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.427421   80180 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
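The sed edits above prepare the CRI-O drop-in before the daemon restart. As a rough sanity check (illustrative only; the key names come from the commands above, but the exact file layout is assumed from a stock CRI-O 1.29 install, not read from this run):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",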
	I0717 18:40:56.324834   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.325283   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:56.077315   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.077815   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:55.259450   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.759932   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.259395   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.759855   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.259739   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.759436   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.258951   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.759931   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.259588   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.759651   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.428872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:58.431182   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431554   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:58.431580   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431756   80180 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:58.435914   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:58.448777   80180 kubeadm.go:883] updating cluster {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:58.448923   80180 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:58.449018   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:58.488011   80180 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:58.488077   80180 ssh_runner.go:195] Run: which lz4
	I0717 18:40:58.491828   80180 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:58.495609   80180 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:58.495640   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:59.686445   80180 crio.go:462] duration metric: took 1.194619366s to copy over tarball
	I0717 18:40:59.686513   80180 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:41:01.862679   80180 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176132338s)
	I0717 18:41:01.862710   80180 crio.go:469] duration metric: took 2.176236509s to extract the tarball
	I0717 18:41:01.862719   80180 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:41:01.901813   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:41:01.945403   80180 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:41:01.945429   80180 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:41:01.945438   80180 kubeadm.go:934] updating node { 192.168.61.90 8443 v1.30.2 crio true true} ...
	I0717 18:41:01.945554   80180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-527415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:41:01.945631   80180 ssh_runner.go:195] Run: crio config
	I0717 18:41:01.991102   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:01.991130   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:01.991144   80180 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:41:01.991168   80180 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.90 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-527415 NodeName:embed-certs-527415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:41:01.991331   80180 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-527415"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
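	The generated config above is a single file carrying four documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A config of this shape can be checked before it is applied; illustrative only, and it assumes a `kubeadm config validate` subcommand is available in the staged v1.30.2 binaries:

	sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" \
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml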
	
	I0717 18:41:01.991397   80180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:41:02.001007   80180 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:41:02.001082   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:41:02.010130   80180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0717 18:41:02.025405   80180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:41:02.041167   80180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0717 18:41:02.057441   80180 ssh_runner.go:195] Run: grep 192.168.61.90	control-plane.minikube.internal$ /etc/hosts
	I0717 18:41:02.060878   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:41:02.072984   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:41:02.188194   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:41:02.204599   80180 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415 for IP: 192.168.61.90
	I0717 18:41:02.204623   80180 certs.go:194] generating shared ca certs ...
	I0717 18:41:02.204643   80180 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:41:02.204822   80180 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:41:02.204885   80180 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:41:02.204899   80180 certs.go:256] generating profile certs ...
	I0717 18:41:02.205047   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key
	I0717 18:41:02.205129   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9
	I0717 18:41:02.205188   80180 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key
	I0717 18:41:02.205372   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:41:02.205436   80180 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:41:02.205451   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:41:02.205486   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:41:02.205526   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:41:02.205556   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:41:02.205612   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:41:02.206441   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:41:02.234135   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:41:02.259780   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:41:02.285464   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:41:02.316267   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 18:41:02.348835   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:41:02.375505   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:41:02.402683   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:41:02.426689   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:41:02.449328   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:41:02.472140   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:41:02.494016   80180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:41:02.512612   80180 ssh_runner.go:195] Run: openssl version
	I0717 18:41:02.519908   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:41:02.532706   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538136   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538191   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.545493   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:41:02.558832   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:41:02.570455   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575515   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575582   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.581428   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:41:02.592439   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:41:02.602823   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608370   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608433   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.615367   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
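The openssl -hash / ln pairs above install each CA under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem) so the system trust store can resolve it. A sketch of the equivalent manual step:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${hash}.0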
	I0717 18:41:02.628355   80180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:41:02.632772   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:41:02.638325   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:41:02.643635   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:41:02.648960   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:41:02.654088   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:41:02.659220   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
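The -checkend 86400 runs above exit non-zero when a certificate expires within 24 hours, which appears to be what decides whether the profile certs are regenerated. Illustrative manual form of the same check:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"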
	I0717 18:41:02.664325   80180 kubeadm.go:392] StartCluster: {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:41:02.664444   80180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:41:02.664495   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.699590   80180 cri.go:89] found id: ""
	I0717 18:41:02.699676   80180 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:41:02.709427   80180 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:41:02.709452   80180 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:41:02.709503   80180 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:41:02.718489   80180 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:41:02.719505   80180 kubeconfig.go:125] found "embed-certs-527415" server: "https://192.168.61.90:8443"
	I0717 18:41:02.721457   80180 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:41:02.730258   80180 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.90
	I0717 18:41:02.730288   80180 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:41:02.730301   80180 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:41:02.730367   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.768268   80180 cri.go:89] found id: ""
	I0717 18:41:02.768339   80180 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:41:02.786699   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:41:02.796888   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:41:02.796912   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:41:02.796965   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:41:02.805633   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:41:02.805703   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:41:02.817624   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:41:02.827840   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:41:02.827902   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:41:02.836207   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.844201   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:41:02.844265   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.852667   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:41:02.860697   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:41:02.860741   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:41:02.869133   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:41:02.877992   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:02.986350   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:00.823447   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.825375   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:00.578095   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.576899   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.576927   81068 pod_ready.go:81] duration metric: took 10.506835962s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.576953   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584912   81068 pod_ready.go:92] pod "kube-proxy-hj7ss" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.584933   81068 pod_ready.go:81] duration metric: took 7.972079ms for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584964   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590342   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.590366   81068 pod_ready.go:81] duration metric: took 5.392364ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590380   81068 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:00.259461   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:00.759148   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.259596   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.759943   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.259670   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.759900   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.259745   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.759843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.259902   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.759850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.874112   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.091026   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.170734   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
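Because existing configuration files were found earlier, this restart path drives kubeadm phase by phase instead of running a full init. Gathered from the interleaved lines above, the sequence is roughly (same flags as in the log):

	kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml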
	I0717 18:41:04.292719   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:41:04.292826   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.793710   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.292924   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.792872   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.293626   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.793632   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.810658   80180 api_server.go:72] duration metric: took 2.517938682s to wait for apiserver process to appear ...
	I0717 18:41:06.810685   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:41:06.810705   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:05.323684   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:07.324653   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:04.596794   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:06.597411   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:09.097409   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:05.259624   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.759258   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.259346   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.759041   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.259467   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.759164   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.259047   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.759959   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.259372   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.759259   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.612683   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.612715   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.612728   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.633949   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.633975   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.811272   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.815690   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:09.815720   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:10.311256   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.319587   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.319620   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:10.811133   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.815819   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.815862   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.311037   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.315892   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.315923   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.811534   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.816601   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.816631   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.311178   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.315484   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.315510   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.811068   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.821016   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.821048   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:13.311166   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:13.315879   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:41:13.322661   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:41:13.322700   80180 api_server.go:131] duration metric: took 6.512007091s to wait for apiserver health ...
	I0717 18:41:13.322713   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:13.322722   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:13.324516   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
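	The 500s above are the apiserver reporting individual poststarthooks that have not finished ([-] lines, reason withheld) while everything else is already ok; minikube keeps re-probing /healthz roughly every 500 ms until the 200 at 18:41:13. A minimal Go sketch of that kind of poll, not minikube's own api_server.go, and with TLS verification disabled purely for illustration (minikube verifies against the cluster CA):

```go
// Minimal sketch (not minikube's implementation): poll an apiserver /healthz
// endpoint until it returns HTTP 200 or the deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only; real checks should trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// A 500 with per-hook [+]/[-] lines means some poststarthooks are still failing.
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence seen in the log
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.90:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```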
	I0717 18:41:09.325535   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.325697   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:13.327238   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.597479   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:14.098908   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:10.259845   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:10.759671   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.259895   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.759877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.259003   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.759685   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.759844   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.259541   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.759709   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.325935   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:41:13.337601   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
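	The conflist pushed to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation, a generic bridge + host-local configuration of the kind CRI-O picks up from that directory looks roughly like the sketch below; the subnet and plugin options are placeholder assumptions, not the exact 496-byte file minikube generates:

```go
// Illustrative only: write a generic bridge CNI conflist to the path seen in
// the log above. Requires root on a real node; values are placeholders.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
		panic(err)
	}
}
```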
	I0717 18:41:13.354366   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:41:13.364678   80180 system_pods.go:59] 8 kube-system pods found
	I0717 18:41:13.364715   80180 system_pods.go:61] "coredns-7db6d8ff4d-2fnlb" [86d50e9b-fb88-4332-90c5-a969b0654635] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:41:13.364726   80180 system_pods.go:61] "etcd-embed-certs-527415" [9d8ac0a8-4639-48d8-8ac4-88b0bd1e2082] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:41:13.364735   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [7f72c4f9-f1db-4ac6-83e1-2b94245107c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:41:13.364743   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [96081a97-2a90-4fec-84cb-9a399a43aeb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:41:13.364752   80180 system_pods.go:61] "kube-proxy-jltfs" [27f6259e-80cc-4881-bb06-6a2ad529179c] Running
	I0717 18:41:13.364763   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [bed7b515-7ab0-460c-a13f-037f29576f30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:41:13.364775   80180 system_pods.go:61] "metrics-server-569cc877fc-8md44" [1b9d50c8-6ca0-41c3-92d9-eebdccbf1a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:41:13.364783   80180 system_pods.go:61] "storage-provisioner" [ccb34b69-d28d-477e-8c7a-0acdc547bec7] Running
	I0717 18:41:13.364791   80180 system_pods.go:74] duration metric: took 10.40947ms to wait for pod list to return data ...
	I0717 18:41:13.364803   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:41:13.367687   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:41:13.367712   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:41:13.367725   80180 node_conditions.go:105] duration metric: took 2.912986ms to run NodePressure ...
	I0717 18:41:13.367745   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:13.630827   80180 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636658   80180 kubeadm.go:739] kubelet initialised
	I0717 18:41:13.636688   80180 kubeadm.go:740] duration metric: took 5.830484ms waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636699   80180 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:41:13.642171   80180 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.650539   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650573   80180 pod_ready.go:81] duration metric: took 8.374432ms for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.650585   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650599   80180 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.655470   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655500   80180 pod_ready.go:81] duration metric: took 4.8911ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.655512   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655520   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.662448   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662479   80180 pod_ready.go:81] duration metric: took 6.949002ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.662490   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662499   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.757454   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757485   80180 pod_ready.go:81] duration metric: took 94.976348ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.757494   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757501   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157339   80180 pod_ready.go:92] pod "kube-proxy-jltfs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:14.157363   80180 pod_ready.go:81] duration metric: took 399.852649ms for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157381   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:16.163623   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:15.825045   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.323440   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:16.596320   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.596807   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:15.259558   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:15.759585   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.259850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.760009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.259385   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.759208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.259218   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.759779   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.259666   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.759781   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.174371   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.664423   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.663932   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:22.663955   80180 pod_ready.go:81] duration metric: took 8.506565077s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:22.663969   80180 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
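	The pod_ready lines above and below are minikube polling each system-critical pod's Ready condition (and skipping pods whose node is itself not Ready). A minimal client-go sketch of the same kind of check, using the kubeconfig path and pod name that appear in this log; this is an illustration, not minikube's pod_ready helper:

```go
// Minimal sketch: poll a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-569cc877fc-8md44", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // roughly the cadence of the pod_ready lines in the log
	}
}
```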
	I0717 18:41:20.324547   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.824318   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:21.096071   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:23.596775   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.259286   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:20.759048   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.259801   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.759595   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.259582   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.759871   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.259349   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.759659   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.259964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.759899   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.671105   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:27.170247   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:24.825017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.825067   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.096196   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:28.097501   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:25.259559   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:25.759773   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.759924   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.259509   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.759986   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.259792   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.759564   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:29.259060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:29.259143   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:29.298974   80857 cri.go:89] found id: ""
	I0717 18:41:29.299006   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.299016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:29.299024   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:29.299087   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:29.333764   80857 cri.go:89] found id: ""
	I0717 18:41:29.333786   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.333793   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:29.333801   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:29.333849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:29.369639   80857 cri.go:89] found id: ""
	I0717 18:41:29.369674   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.369688   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:29.369697   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:29.369762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:29.403453   80857 cri.go:89] found id: ""
	I0717 18:41:29.403481   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.403489   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:29.403498   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:29.403555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:29.436662   80857 cri.go:89] found id: ""
	I0717 18:41:29.436687   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.436695   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:29.436701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:29.436749   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:29.471013   80857 cri.go:89] found id: ""
	I0717 18:41:29.471053   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.471064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:29.471074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:29.471139   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:29.502754   80857 cri.go:89] found id: ""
	I0717 18:41:29.502780   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.502787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:29.502793   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:29.502842   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:29.534205   80857 cri.go:89] found id: ""
	I0717 18:41:29.534232   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.534239   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:29.534247   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:29.534259   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:29.585406   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:29.585438   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:29.600629   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:29.600660   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:29.719788   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:29.719807   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:29.719819   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:29.785626   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:29.785662   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
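	Each of these cycles is minikube asking the still-empty CRI-O runtime for control-plane containers and then falling back to gathering kubelet, dmesg, describe-nodes, and CRI-O journal output. A minimal sketch of the container-listing step run locally (minikube issues the same crictl command over SSH via ssh_runner); the component list here is illustrative:

```go
// Minimal sketch (not minikube's logs.go): list container IDs for a component
// the same way the log lines above do, by shelling out to crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	// Equivalent to: sudo crictl ps -a --quiet --name=<name>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(component)
		if err != nil {
			fmt.Println(component, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
	}
}
```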
	I0717 18:41:29.669918   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.670544   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:29.325013   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.828532   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:30.097685   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.596760   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.325522   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:32.338046   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:32.338120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:32.370073   80857 cri.go:89] found id: ""
	I0717 18:41:32.370099   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.370106   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:32.370112   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:32.370165   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:32.408764   80857 cri.go:89] found id: ""
	I0717 18:41:32.408789   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.408799   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:32.408806   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:32.408862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:32.449078   80857 cri.go:89] found id: ""
	I0717 18:41:32.449108   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.449118   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:32.449125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:32.449176   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:32.481990   80857 cri.go:89] found id: ""
	I0717 18:41:32.482015   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.482022   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:32.482028   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:32.482077   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:32.521902   80857 cri.go:89] found id: ""
	I0717 18:41:32.521932   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.521942   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:32.521949   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:32.521997   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:32.554148   80857 cri.go:89] found id: ""
	I0717 18:41:32.554177   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.554206   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:32.554216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:32.554270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:32.587342   80857 cri.go:89] found id: ""
	I0717 18:41:32.587366   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.587374   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:32.587379   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:32.587425   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:32.619227   80857 cri.go:89] found id: ""
	I0717 18:41:32.619259   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.619270   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:32.619281   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:32.619296   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:32.669085   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:32.669124   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:32.682464   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:32.682500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:32.749218   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:32.749234   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:32.749245   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:32.814510   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:32.814545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:33.670578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.670952   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.671373   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:34.324458   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:36.823615   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:38.825194   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.096041   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.096436   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:39.096906   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.362866   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:35.375563   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:35.375643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:35.412355   80857 cri.go:89] found id: ""
	I0717 18:41:35.412380   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.412388   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:35.412393   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:35.412439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:35.446596   80857 cri.go:89] found id: ""
	I0717 18:41:35.446621   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.446629   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:35.446634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:35.446691   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:35.481695   80857 cri.go:89] found id: ""
	I0717 18:41:35.481717   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.481725   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:35.481730   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:35.481783   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:35.514528   80857 cri.go:89] found id: ""
	I0717 18:41:35.514573   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.514584   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:35.514592   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:35.514657   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:35.547831   80857 cri.go:89] found id: ""
	I0717 18:41:35.547858   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.547871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:35.547879   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:35.547941   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:35.579059   80857 cri.go:89] found id: ""
	I0717 18:41:35.579084   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.579097   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:35.579104   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:35.579164   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:35.616442   80857 cri.go:89] found id: ""
	I0717 18:41:35.616480   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.616487   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:35.616492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:35.616545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:35.647535   80857 cri.go:89] found id: ""
	I0717 18:41:35.647564   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.647571   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:35.647579   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:35.647595   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:35.696664   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:35.696692   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:35.710474   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:35.710499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:35.785569   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:35.785595   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:35.785611   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:35.865750   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:35.865785   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:38.405391   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:38.417737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:38.417806   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:38.453848   80857 cri.go:89] found id: ""
	I0717 18:41:38.453877   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.453888   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:38.453895   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:38.453949   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:38.487083   80857 cri.go:89] found id: ""
	I0717 18:41:38.487112   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.487122   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:38.487129   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:38.487190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:38.517700   80857 cri.go:89] found id: ""
	I0717 18:41:38.517729   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.517738   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:38.517746   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:38.517808   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:38.547587   80857 cri.go:89] found id: ""
	I0717 18:41:38.547616   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.547625   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:38.547632   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:38.547780   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:38.581511   80857 cri.go:89] found id: ""
	I0717 18:41:38.581535   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.581542   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:38.581548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:38.581675   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:38.618308   80857 cri.go:89] found id: ""
	I0717 18:41:38.618327   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.618334   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:38.618340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:38.618401   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:38.658237   80857 cri.go:89] found id: ""
	I0717 18:41:38.658267   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.658278   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:38.658298   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:38.658359   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:38.694044   80857 cri.go:89] found id: ""
	I0717 18:41:38.694071   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.694080   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:38.694090   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:38.694106   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:38.746621   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:38.746658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:38.758781   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:38.758805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:38.827327   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:38.827345   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:38.827357   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:38.899731   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:38.899762   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:40.170106   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:42.170391   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:40.825940   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.327489   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.097668   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.597625   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.437479   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:41.451264   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:41.451336   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:41.489053   80857 cri.go:89] found id: ""
	I0717 18:41:41.489083   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.489093   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:41.489101   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:41.489162   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:41.521954   80857 cri.go:89] found id: ""
	I0717 18:41:41.521985   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.521996   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:41.522003   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:41.522068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:41.556847   80857 cri.go:89] found id: ""
	I0717 18:41:41.556875   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.556884   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:41.556893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:41.557024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:41.591232   80857 cri.go:89] found id: ""
	I0717 18:41:41.591255   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.591263   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:41.591269   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:41.591315   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:41.624533   80857 cri.go:89] found id: ""
	I0717 18:41:41.624565   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.624576   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:41.624583   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:41.624644   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:41.656033   80857 cri.go:89] found id: ""
	I0717 18:41:41.656063   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.656073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:41.656080   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:41.656140   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:41.691686   80857 cri.go:89] found id: ""
	I0717 18:41:41.691715   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.691725   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:41.691732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:41.691789   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:41.724688   80857 cri.go:89] found id: ""
	I0717 18:41:41.724718   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.724729   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:41.724741   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:41.724760   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:41.802855   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:41.802882   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:41.839242   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:41.839271   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:41.889028   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:41.889058   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:41.901598   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:41.901627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:41.972632   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.472824   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:44.487673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:44.487745   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:44.530173   80857 cri.go:89] found id: ""
	I0717 18:41:44.530204   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.530216   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:44.530224   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:44.530288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:44.577865   80857 cri.go:89] found id: ""
	I0717 18:41:44.577891   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.577899   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:44.577905   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:44.577967   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:44.621528   80857 cri.go:89] found id: ""
	I0717 18:41:44.621551   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.621559   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:44.621564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:44.621622   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:44.655456   80857 cri.go:89] found id: ""
	I0717 18:41:44.655488   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.655498   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:44.655505   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:44.655570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:44.688729   80857 cri.go:89] found id: ""
	I0717 18:41:44.688757   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.688767   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:44.688774   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:44.688832   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:44.720190   80857 cri.go:89] found id: ""
	I0717 18:41:44.720220   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.720231   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:44.720238   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:44.720294   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:44.750109   80857 cri.go:89] found id: ""
	I0717 18:41:44.750135   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.750142   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:44.750147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:44.750203   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:44.780039   80857 cri.go:89] found id: ""
	I0717 18:41:44.780066   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.780090   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:44.780098   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:44.780111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:44.829641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:44.829675   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:44.842587   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:44.842616   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:44.906331   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.906355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:44.906369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:44.983364   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:44.983400   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:44.671557   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.170565   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:45.827780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.324627   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:46.096988   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.596469   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.525057   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:47.538586   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:47.538639   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:47.574805   80857 cri.go:89] found id: ""
	I0717 18:41:47.574832   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.574843   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:47.574849   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:47.574906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:47.609576   80857 cri.go:89] found id: ""
	I0717 18:41:47.609603   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.609611   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:47.609617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:47.609662   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:47.643899   80857 cri.go:89] found id: ""
	I0717 18:41:47.643927   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.643936   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:47.643941   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:47.643990   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:47.680365   80857 cri.go:89] found id: ""
	I0717 18:41:47.680404   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.680412   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:47.680418   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:47.680475   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:47.719038   80857 cri.go:89] found id: ""
	I0717 18:41:47.719061   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.719069   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:47.719074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:47.719118   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:47.751708   80857 cri.go:89] found id: ""
	I0717 18:41:47.751735   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.751744   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:47.751750   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:47.751807   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:47.789803   80857 cri.go:89] found id: ""
	I0717 18:41:47.789838   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.789850   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:47.789858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:47.789921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:47.821450   80857 cri.go:89] found id: ""
	I0717 18:41:47.821477   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.821487   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:47.821496   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:47.821511   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:47.886501   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:47.886526   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:47.886544   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:47.960142   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:47.960177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:47.995012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:47.995046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:48.046848   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:48.046884   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:49.670208   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:52.169471   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.324628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.597215   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.096114   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.560990   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:50.574906   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:50.575051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:50.607647   80857 cri.go:89] found id: ""
	I0717 18:41:50.607674   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.607687   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:50.607696   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:50.607756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:50.640621   80857 cri.go:89] found id: ""
	I0717 18:41:50.640651   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.640660   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:50.640667   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:50.640741   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:50.675269   80857 cri.go:89] found id: ""
	I0717 18:41:50.675293   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.675303   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:50.675313   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:50.675369   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:50.707915   80857 cri.go:89] found id: ""
	I0717 18:41:50.707938   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.707946   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:50.707951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:50.708006   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:50.741149   80857 cri.go:89] found id: ""
	I0717 18:41:50.741170   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.741178   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:50.741184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:50.741288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:50.772768   80857 cri.go:89] found id: ""
	I0717 18:41:50.772792   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.772799   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:50.772804   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:50.772854   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:50.804996   80857 cri.go:89] found id: ""
	I0717 18:41:50.805018   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.805028   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:50.805035   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:50.805094   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:50.838933   80857 cri.go:89] found id: ""
	I0717 18:41:50.838960   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.838971   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:50.838982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:50.838997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:50.886415   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:50.886444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:50.899024   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:50.899049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:50.965388   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:50.965416   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:50.965434   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:51.044449   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:51.044490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.580749   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:53.593759   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:53.593841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:53.626541   80857 cri.go:89] found id: ""
	I0717 18:41:53.626573   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.626582   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:53.626588   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:53.626645   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:53.658492   80857 cri.go:89] found id: ""
	I0717 18:41:53.658520   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.658529   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:53.658537   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:53.658600   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:53.694546   80857 cri.go:89] found id: ""
	I0717 18:41:53.694582   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.694590   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:53.694595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:53.694650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:53.727028   80857 cri.go:89] found id: ""
	I0717 18:41:53.727053   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.727061   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:53.727067   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:53.727129   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:53.762869   80857 cri.go:89] found id: ""
	I0717 18:41:53.762897   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.762906   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:53.762913   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:53.762976   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:53.794133   80857 cri.go:89] found id: ""
	I0717 18:41:53.794158   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.794166   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:53.794172   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:53.794225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:53.828432   80857 cri.go:89] found id: ""
	I0717 18:41:53.828463   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.828473   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:53.828484   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:53.828546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:53.863316   80857 cri.go:89] found id: ""
	I0717 18:41:53.863345   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.863353   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:53.863362   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:53.863384   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.897353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:53.897380   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:53.944213   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:53.944242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:53.957484   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:53.957509   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:54.025962   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:54.025992   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:54.026006   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:54.170642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.672407   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.325017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:57.823877   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.596492   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:58.096397   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.609502   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:56.621849   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:56.621913   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:56.657469   80857 cri.go:89] found id: ""
	I0717 18:41:56.657498   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.657510   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:56.657517   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:56.657579   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:56.691298   80857 cri.go:89] found id: ""
	I0717 18:41:56.691320   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.691327   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:56.691332   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:56.691386   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:56.723305   80857 cri.go:89] found id: ""
	I0717 18:41:56.723334   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.723344   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:56.723352   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:56.723417   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:56.755893   80857 cri.go:89] found id: ""
	I0717 18:41:56.755918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.755926   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:56.755931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:56.755982   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:56.787777   80857 cri.go:89] found id: ""
	I0717 18:41:56.787807   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.787819   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:56.787828   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:56.787894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:56.821126   80857 cri.go:89] found id: ""
	I0717 18:41:56.821152   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.821163   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:56.821170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:56.821228   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:56.855894   80857 cri.go:89] found id: ""
	I0717 18:41:56.855918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.855926   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:56.855931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:56.855980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:56.893483   80857 cri.go:89] found id: ""
	I0717 18:41:56.893505   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.893512   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:56.893521   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:56.893532   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:56.945355   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:56.945385   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:56.958426   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:56.958451   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:57.025542   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:57.025571   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:57.025585   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:57.100497   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:57.100528   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:59.636400   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:59.648517   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:59.648571   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:59.683954   80857 cri.go:89] found id: ""
	I0717 18:41:59.683978   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.683988   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:59.683995   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:59.684065   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:59.719135   80857 cri.go:89] found id: ""
	I0717 18:41:59.719162   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.719172   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:59.719179   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:59.719243   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:59.755980   80857 cri.go:89] found id: ""
	I0717 18:41:59.756012   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.756023   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:59.756030   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:59.756091   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:59.788147   80857 cri.go:89] found id: ""
	I0717 18:41:59.788176   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.788185   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:59.788191   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:59.788239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:59.819646   80857 cri.go:89] found id: ""
	I0717 18:41:59.819670   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.819679   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:59.819685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:59.819738   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:59.852487   80857 cri.go:89] found id: ""
	I0717 18:41:59.852508   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.852516   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:59.852521   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:59.852586   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:59.883761   80857 cri.go:89] found id: ""
	I0717 18:41:59.883794   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.883805   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:59.883812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:59.883870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:59.914854   80857 cri.go:89] found id: ""
	I0717 18:41:59.914882   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.914889   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:59.914896   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:59.914909   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:59.995619   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:59.995650   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:00.034444   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:00.034472   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:59.172253   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.670422   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:59.824347   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.824444   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:03.826580   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.096457   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:02.596587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.084278   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:00.084308   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:00.097771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:00.097796   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:00.161753   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:02.662134   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:02.676200   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:02.676277   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:02.711606   80857 cri.go:89] found id: ""
	I0717 18:42:02.711640   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.711652   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:02.711659   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:02.711711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:02.744704   80857 cri.go:89] found id: ""
	I0717 18:42:02.744728   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.744735   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:02.744741   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:02.744800   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:02.778815   80857 cri.go:89] found id: ""
	I0717 18:42:02.778846   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.778859   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:02.778868   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:02.778936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:02.810896   80857 cri.go:89] found id: ""
	I0717 18:42:02.810928   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.810941   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:02.810950   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:02.811024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:02.843868   80857 cri.go:89] found id: ""
	I0717 18:42:02.843892   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.843903   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:02.843910   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:02.843972   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:02.876311   80857 cri.go:89] found id: ""
	I0717 18:42:02.876338   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.876348   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:02.876356   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:02.876420   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:02.910752   80857 cri.go:89] found id: ""
	I0717 18:42:02.910776   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.910784   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:02.910789   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:02.910835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:02.947286   80857 cri.go:89] found id: ""
	I0717 18:42:02.947318   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.947328   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:02.947337   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:02.947351   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:02.999512   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:02.999542   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:03.014063   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:03.014094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:03.081822   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:03.081844   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:03.081858   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:03.161088   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:03.161117   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:04.171168   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.669508   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.324608   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:08.825084   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:04.597129   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:07.098716   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:05.699198   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:05.711597   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:05.711654   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:05.749653   80857 cri.go:89] found id: ""
	I0717 18:42:05.749684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.749694   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:05.749703   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:05.749757   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:05.785095   80857 cri.go:89] found id: ""
	I0717 18:42:05.785118   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.785125   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:05.785134   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:05.785179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:05.818085   80857 cri.go:89] found id: ""
	I0717 18:42:05.818111   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.818119   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:05.818125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:05.818171   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:05.851872   80857 cri.go:89] found id: ""
	I0717 18:42:05.851895   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.851902   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:05.851907   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:05.851958   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:05.883924   80857 cri.go:89] found id: ""
	I0717 18:42:05.883948   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.883958   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:05.883965   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:05.884025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:05.916365   80857 cri.go:89] found id: ""
	I0717 18:42:05.916396   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.916407   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:05.916414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:05.916473   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:05.950656   80857 cri.go:89] found id: ""
	I0717 18:42:05.950684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.950695   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:05.950701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:05.950762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:05.992132   80857 cri.go:89] found id: ""
	I0717 18:42:05.992160   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.992169   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:05.992177   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:05.992190   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:06.042162   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:06.042192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:06.055594   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:06.055619   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:06.123007   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:06.123038   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:06.123068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:06.200429   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:06.200460   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.739039   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:08.751520   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:08.751575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:08.783765   80857 cri.go:89] found id: ""
	I0717 18:42:08.783794   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.783805   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:08.783812   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:08.783864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:08.815200   80857 cri.go:89] found id: ""
	I0717 18:42:08.815227   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.815236   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:08.815242   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:08.815289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:08.848970   80857 cri.go:89] found id: ""
	I0717 18:42:08.849002   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.849012   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:08.849021   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:08.849084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:08.881832   80857 cri.go:89] found id: ""
	I0717 18:42:08.881859   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.881866   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:08.881874   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:08.881922   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:08.913119   80857 cri.go:89] found id: ""
	I0717 18:42:08.913142   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.913149   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:08.913155   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:08.913201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:08.947471   80857 cri.go:89] found id: ""
	I0717 18:42:08.947499   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.947509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:08.947515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:08.947570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:08.979570   80857 cri.go:89] found id: ""
	I0717 18:42:08.979599   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.979609   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:08.979615   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:08.979670   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:09.012960   80857 cri.go:89] found id: ""
	I0717 18:42:09.012991   80857 logs.go:276] 0 containers: []
	W0717 18:42:09.013002   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:09.013012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:09.013027   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:09.065732   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:09.065769   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:09.079572   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:09.079602   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:09.151737   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:09.151754   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:09.151766   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:09.230185   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:09.230218   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.670185   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:10.671336   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.325340   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:13.824087   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:09.595757   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.596784   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:14.096765   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.767189   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:11.780044   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:11.780115   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:11.812700   80857 cri.go:89] found id: ""
	I0717 18:42:11.812722   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.812730   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:11.812736   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:11.812781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:11.846855   80857 cri.go:89] found id: ""
	I0717 18:42:11.846883   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.846893   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:11.846900   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:11.846962   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:11.877671   80857 cri.go:89] found id: ""
	I0717 18:42:11.877700   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.877710   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:11.877716   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:11.877767   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:11.908703   80857 cri.go:89] found id: ""
	I0717 18:42:11.908728   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.908735   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:11.908740   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:11.908786   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:11.942191   80857 cri.go:89] found id: ""
	I0717 18:42:11.942218   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.942225   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:11.942231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:11.942284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:11.974751   80857 cri.go:89] found id: ""
	I0717 18:42:11.974782   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.974798   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:11.974807   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:11.974876   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:12.006287   80857 cri.go:89] found id: ""
	I0717 18:42:12.006317   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.006327   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:12.006335   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:12.006396   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:12.036524   80857 cri.go:89] found id: ""
	I0717 18:42:12.036546   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.036554   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:12.036575   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:12.036599   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:12.085073   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:12.085109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:12.098908   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:12.098937   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:12.161665   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:12.161687   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:12.161702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:12.240349   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:12.240401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:14.781101   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:14.794081   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:14.794149   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:14.828975   80857 cri.go:89] found id: ""
	I0717 18:42:14.829003   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.829013   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:14.829021   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:14.829072   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:14.864858   80857 cri.go:89] found id: ""
	I0717 18:42:14.864886   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.864896   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:14.864903   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:14.864986   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:14.897961   80857 cri.go:89] found id: ""
	I0717 18:42:14.897983   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.897991   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:14.897996   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:14.898041   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:14.935499   80857 cri.go:89] found id: ""
	I0717 18:42:14.935521   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.935529   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:14.935534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:14.935591   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:14.967581   80857 cri.go:89] found id: ""
	I0717 18:42:14.967605   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.967621   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:14.967629   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:14.967688   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:15.001844   80857 cri.go:89] found id: ""
	I0717 18:42:15.001876   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.001888   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:15.001894   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:15.001942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:15.038940   80857 cri.go:89] found id: ""
	I0717 18:42:15.038967   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.038977   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:15.038985   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:15.039043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:13.170111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.669712   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:17.669916   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.325511   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:18.823820   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.597587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:19.096905   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.072636   80857 cri.go:89] found id: ""
	I0717 18:42:15.072665   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.072677   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:15.072688   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:15.072703   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:15.124889   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:15.124934   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:15.138661   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:15.138691   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:15.208762   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:15.208791   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:15.208806   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:15.281302   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:15.281336   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
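	[editor's note] The block above is one pass of minikube's log-gathering loop: it probes for each expected control-plane container with crictl, finds none, then collects kubelet, dmesg, CRI-O, and container-status output, and "kubectl describe nodes" fails because nothing is serving on localhost:8443. A minimal sketch of running the same probes by hand, assuming a shell on the affected node (e.g. via "minikube ssh"); these commands are illustrative and not part of the recorded test run:
	
	    # Sketch only: manual versions of the probes the loop above runs.
	    sudo crictl ps -a --name kube-apiserver        # the loop finds no apiserver container
	    sudo journalctl -u kubelet -n 400 --no-pager   # same kubelet slice gathered as "kubelet" logs above
	    curl -sk https://localhost:8443/healthz        # connection refused here is why describe-nodes keeps failing
	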
	I0717 18:42:17.817136   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:17.831013   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:17.831078   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:17.867065   80857 cri.go:89] found id: ""
	I0717 18:42:17.867091   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.867101   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:17.867108   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:17.867166   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:17.904143   80857 cri.go:89] found id: ""
	I0717 18:42:17.904171   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.904180   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:17.904188   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:17.904248   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:17.937450   80857 cri.go:89] found id: ""
	I0717 18:42:17.937478   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.937487   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:17.937492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:17.937556   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:17.970650   80857 cri.go:89] found id: ""
	I0717 18:42:17.970679   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.970689   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:17.970696   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:17.970754   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:18.002329   80857 cri.go:89] found id: ""
	I0717 18:42:18.002355   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.002364   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:18.002371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:18.002430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:18.035253   80857 cri.go:89] found id: ""
	I0717 18:42:18.035278   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.035288   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:18.035295   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:18.035356   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:18.070386   80857 cri.go:89] found id: ""
	I0717 18:42:18.070419   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.070431   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:18.070439   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:18.070507   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:18.106148   80857 cri.go:89] found id: ""
	I0717 18:42:18.106170   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.106177   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:18.106185   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:18.106201   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:18.157359   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:18.157390   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:18.171757   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:18.171782   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:18.242795   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:18.242818   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:18.242831   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:18.316221   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:18.316255   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:19.670562   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.171111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.824266   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.824366   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:21.596773   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.098051   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.857953   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:20.870813   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:20.870882   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:20.906033   80857 cri.go:89] found id: ""
	I0717 18:42:20.906065   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.906075   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:20.906083   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:20.906142   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:20.942292   80857 cri.go:89] found id: ""
	I0717 18:42:20.942316   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.942335   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:20.942342   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:20.942403   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:20.985113   80857 cri.go:89] found id: ""
	I0717 18:42:20.985143   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.985151   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:20.985157   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:20.985217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:21.021807   80857 cri.go:89] found id: ""
	I0717 18:42:21.021834   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.021842   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:21.021847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:21.021906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:21.061924   80857 cri.go:89] found id: ""
	I0717 18:42:21.061949   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.061961   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:21.061969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:21.062025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:21.098890   80857 cri.go:89] found id: ""
	I0717 18:42:21.098916   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.098927   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:21.098935   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:21.098991   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:21.132576   80857 cri.go:89] found id: ""
	I0717 18:42:21.132612   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.132621   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:21.132627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:21.132687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:21.167723   80857 cri.go:89] found id: ""
	I0717 18:42:21.167765   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.167778   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:21.167788   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:21.167803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:21.220427   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:21.220461   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:21.233191   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:21.233216   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:21.304462   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:21.304481   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:21.304498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:21.386887   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:21.386925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:23.926518   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:23.940470   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:23.940534   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:23.976739   80857 cri.go:89] found id: ""
	I0717 18:42:23.976763   80857 logs.go:276] 0 containers: []
	W0717 18:42:23.976773   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:23.976778   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:23.976838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:24.007575   80857 cri.go:89] found id: ""
	I0717 18:42:24.007603   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.007612   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:24.007617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:24.007671   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:24.040430   80857 cri.go:89] found id: ""
	I0717 18:42:24.040455   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.040463   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:24.040468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:24.040581   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:24.071602   80857 cri.go:89] found id: ""
	I0717 18:42:24.071629   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.071638   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:24.071644   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:24.071705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:24.109570   80857 cri.go:89] found id: ""
	I0717 18:42:24.109595   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.109602   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:24.109607   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:24.109667   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:24.144284   80857 cri.go:89] found id: ""
	I0717 18:42:24.144305   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.144328   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:24.144333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:24.144382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:24.179441   80857 cri.go:89] found id: ""
	I0717 18:42:24.179467   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.179474   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:24.179479   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:24.179545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:24.222100   80857 cri.go:89] found id: ""
	I0717 18:42:24.222133   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.222143   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:24.222159   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:24.222175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:24.273181   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:24.273215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:24.285835   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:24.285861   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:24.357804   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:24.357826   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:24.357839   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:24.437270   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:24.437310   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:24.670033   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.671014   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:27.325296   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.597795   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.098055   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
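	[editor's note] The pod_ready lines interleaved through this section come from three other profiles running in parallel (process IDs 80180, 80401, and 81068), each polling a metrics-server pod that never reports Ready. A sketch of the equivalent manual check; the kubectl context name is a placeholder, and the k8s-app=metrics-server label is assumed from the standard metrics-server addon rather than taken from this run:
	
	    # Sketch only: inspect the Ready condition those loops are waiting on.
	    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'
	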
	I0717 18:42:26.979543   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:26.992443   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:26.992497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:27.025520   80857 cri.go:89] found id: ""
	I0717 18:42:27.025548   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.025560   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:27.025567   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:27.025630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:27.059971   80857 cri.go:89] found id: ""
	I0717 18:42:27.060002   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.060011   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:27.060016   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:27.060068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:27.091370   80857 cri.go:89] found id: ""
	I0717 18:42:27.091397   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.091407   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:27.091415   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:27.091468   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:27.123736   80857 cri.go:89] found id: ""
	I0717 18:42:27.123768   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.123779   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:27.123786   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:27.123849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:27.156155   80857 cri.go:89] found id: ""
	I0717 18:42:27.156177   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.156185   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:27.156190   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:27.156239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:27.190701   80857 cri.go:89] found id: ""
	I0717 18:42:27.190729   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.190741   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:27.190749   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:27.190825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:27.222093   80857 cri.go:89] found id: ""
	I0717 18:42:27.222119   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.222130   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:27.222137   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:27.222199   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:27.258789   80857 cri.go:89] found id: ""
	I0717 18:42:27.258813   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.258824   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:27.258834   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:27.258848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:27.307033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:27.307068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:27.321181   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:27.321209   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:27.390560   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:27.390593   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:27.390613   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:27.464352   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:27.464389   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:30.005732   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:30.019088   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:30.019160   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:29.170578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.670221   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.327610   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.824292   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.824392   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.595937   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.597622   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:30.052733   80857 cri.go:89] found id: ""
	I0717 18:42:30.052757   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.052765   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:30.052775   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:30.052836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:30.087683   80857 cri.go:89] found id: ""
	I0717 18:42:30.087711   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.087722   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:30.087729   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:30.087774   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:30.124371   80857 cri.go:89] found id: ""
	I0717 18:42:30.124404   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.124416   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:30.124432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:30.124487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:30.160081   80857 cri.go:89] found id: ""
	I0717 18:42:30.160107   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.160115   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:30.160122   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:30.160173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:30.194420   80857 cri.go:89] found id: ""
	I0717 18:42:30.194447   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.194456   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:30.194464   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:30.194522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:30.229544   80857 cri.go:89] found id: ""
	I0717 18:42:30.229570   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.229584   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:30.229591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:30.229650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:30.264164   80857 cri.go:89] found id: ""
	I0717 18:42:30.264193   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.264204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:30.264211   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:30.264266   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:30.296958   80857 cri.go:89] found id: ""
	I0717 18:42:30.296986   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.296996   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:30.297008   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:30.297049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:30.348116   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:30.348145   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:30.361373   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:30.361401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:30.429601   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:30.429620   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:30.429634   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:30.507718   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:30.507752   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:33.045539   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:33.058149   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:33.058219   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:33.088675   80857 cri.go:89] found id: ""
	I0717 18:42:33.088702   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.088710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:33.088717   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:33.088773   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:33.121269   80857 cri.go:89] found id: ""
	I0717 18:42:33.121297   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.121308   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:33.121315   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:33.121375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:33.156144   80857 cri.go:89] found id: ""
	I0717 18:42:33.156173   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.156184   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:33.156192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:33.156257   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:33.188559   80857 cri.go:89] found id: ""
	I0717 18:42:33.188585   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.188597   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:33.188603   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:33.188651   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:33.219650   80857 cri.go:89] found id: ""
	I0717 18:42:33.219672   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.219680   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:33.219686   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:33.219746   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:33.249704   80857 cri.go:89] found id: ""
	I0717 18:42:33.249728   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.249737   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:33.249742   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:33.249793   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:33.283480   80857 cri.go:89] found id: ""
	I0717 18:42:33.283503   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.283511   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:33.283516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:33.283560   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:33.314577   80857 cri.go:89] found id: ""
	I0717 18:42:33.314620   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.314629   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:33.314638   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:33.314649   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:33.363458   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:33.363491   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:33.377240   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:33.377267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:33.442939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:33.442961   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:33.442976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:33.522422   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:33.522456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:34.170638   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.171034   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.324780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.824832   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.097788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.596054   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.063823   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:36.078272   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:36.078342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:36.111460   80857 cri.go:89] found id: ""
	I0717 18:42:36.111494   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.111502   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:36.111509   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:36.111562   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:36.144191   80857 cri.go:89] found id: ""
	I0717 18:42:36.144222   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.144232   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:36.144239   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:36.144306   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:36.177247   80857 cri.go:89] found id: ""
	I0717 18:42:36.177277   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.177288   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:36.177294   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:36.177350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:36.213390   80857 cri.go:89] found id: ""
	I0717 18:42:36.213419   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.213427   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:36.213433   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:36.213493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:36.246775   80857 cri.go:89] found id: ""
	I0717 18:42:36.246799   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.246807   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:36.246812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:36.246870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:36.282441   80857 cri.go:89] found id: ""
	I0717 18:42:36.282463   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.282470   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:36.282476   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:36.282529   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:36.314178   80857 cri.go:89] found id: ""
	I0717 18:42:36.314203   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.314211   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:36.314216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:36.314265   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:36.353705   80857 cri.go:89] found id: ""
	I0717 18:42:36.353730   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.353737   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:36.353746   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:36.353758   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:36.370866   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:36.370894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:36.463660   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:36.463693   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:36.463710   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:36.540337   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:36.540371   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:36.575770   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:36.575801   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.128675   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:39.141187   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:39.141255   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:39.175960   80857 cri.go:89] found id: ""
	I0717 18:42:39.175982   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.175989   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:39.175994   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:39.176051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:39.209442   80857 cri.go:89] found id: ""
	I0717 18:42:39.209472   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.209483   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:39.209490   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:39.209552   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:39.243225   80857 cri.go:89] found id: ""
	I0717 18:42:39.243249   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.243256   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:39.243262   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:39.243309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:39.277369   80857 cri.go:89] found id: ""
	I0717 18:42:39.277396   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.277407   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:39.277414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:39.277464   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:39.310522   80857 cri.go:89] found id: ""
	I0717 18:42:39.310552   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.310563   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:39.310570   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:39.310637   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:39.344186   80857 cri.go:89] found id: ""
	I0717 18:42:39.344208   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.344216   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:39.344221   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:39.344279   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:39.375329   80857 cri.go:89] found id: ""
	I0717 18:42:39.375354   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.375366   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:39.375372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:39.375419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:39.412629   80857 cri.go:89] found id: ""
	I0717 18:42:39.412659   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.412668   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:39.412679   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:39.412696   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:39.447607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:39.447644   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.498981   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:39.499013   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:39.512380   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:39.512409   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:39.580396   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:39.580415   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:39.580428   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:38.670213   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:41.170284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.825257   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:43.324155   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.596267   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.597199   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.158145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:42.177450   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:42.177522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:42.222849   80857 cri.go:89] found id: ""
	I0717 18:42:42.222880   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.222890   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:42.222897   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:42.222954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:42.252712   80857 cri.go:89] found id: ""
	I0717 18:42:42.252742   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.252752   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:42.252757   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:42.252802   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:42.283764   80857 cri.go:89] found id: ""
	I0717 18:42:42.283789   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.283799   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:42.283806   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:42.283864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:42.317243   80857 cri.go:89] found id: ""
	I0717 18:42:42.317270   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.317281   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:42.317288   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:42.317350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:42.349972   80857 cri.go:89] found id: ""
	I0717 18:42:42.350000   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.350010   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:42.350017   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:42.350074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:42.382111   80857 cri.go:89] found id: ""
	I0717 18:42:42.382146   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.382158   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:42.382165   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:42.382223   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:42.414669   80857 cri.go:89] found id: ""
	I0717 18:42:42.414692   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.414700   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:42.414705   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:42.414765   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:42.446533   80857 cri.go:89] found id: ""
	I0717 18:42:42.446571   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.446579   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:42.446588   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:42.446603   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:42.522142   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:42.522165   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:42.522177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:42.602456   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:42.602493   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:42.642192   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:42.642221   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:42.695016   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:42.695046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:43.170955   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.670631   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.325626   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.097244   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.097783   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.208310   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:45.221821   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:45.221901   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:45.256887   80857 cri.go:89] found id: ""
	I0717 18:42:45.256914   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.256924   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:45.256930   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:45.256999   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:45.293713   80857 cri.go:89] found id: ""
	I0717 18:42:45.293735   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.293748   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:45.293753   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:45.293799   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:45.328790   80857 cri.go:89] found id: ""
	I0717 18:42:45.328815   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.328824   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:45.328833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:45.328880   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:45.364977   80857 cri.go:89] found id: ""
	I0717 18:42:45.365004   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.365014   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:45.365022   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:45.365084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:45.401131   80857 cri.go:89] found id: ""
	I0717 18:42:45.401157   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.401164   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:45.401170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:45.401217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:45.432252   80857 cri.go:89] found id: ""
	I0717 18:42:45.432279   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.432287   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:45.432293   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:45.432338   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:45.464636   80857 cri.go:89] found id: ""
	I0717 18:42:45.464659   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.464667   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:45.464674   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:45.464728   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:45.494884   80857 cri.go:89] found id: ""
	I0717 18:42:45.494913   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.494924   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:45.494935   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:45.494949   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:45.546578   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:45.546610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:45.559622   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:45.559647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:45.622094   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:45.622114   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:45.622126   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:45.699772   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:45.699814   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
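	The cycle above is the pattern this log repeats for the next several minutes: probe for a kube-apiserver process, list CRI containers for each control-plane component by name, find none, then fall back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal stand-alone sketch of the per-component check (not minikube's own code; it assumes crictl is installed on the node, sudo is available, and the default CRI socket is reachable, e.g. from a `minikube ssh` session) could look like this:

	// crictl_check.go - sketch only, not minikube source.
	// Lists CRI containers whose name matches a given component and reports
	// whether any exist, mirroring the `sudo crictl ps -a --quiet --name=...`
	// calls in the log above. Assumes crictl is on PATH and sudo works.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: found %d container(s): %v\n", c, len(ids), ids)
		}
	}

	An empty ID list corresponds to the `found id: ""` / `0 containers` lines in the log.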
	I0717 18:42:48.241667   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:48.254205   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:48.254270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:48.293258   80857 cri.go:89] found id: ""
	I0717 18:42:48.293287   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.293298   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:48.293305   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:48.293362   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:48.328778   80857 cri.go:89] found id: ""
	I0717 18:42:48.328807   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.328818   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:48.328824   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:48.328884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:48.360230   80857 cri.go:89] found id: ""
	I0717 18:42:48.360256   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.360266   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:48.360276   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:48.360335   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:48.397770   80857 cri.go:89] found id: ""
	I0717 18:42:48.397797   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.397808   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:48.397815   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:48.397873   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:48.430912   80857 cri.go:89] found id: ""
	I0717 18:42:48.430938   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.430946   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:48.430956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:48.431015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:48.462659   80857 cri.go:89] found id: ""
	I0717 18:42:48.462688   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.462699   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:48.462706   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:48.462771   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:48.497554   80857 cri.go:89] found id: ""
	I0717 18:42:48.497584   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.497594   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:48.497601   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:48.497665   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:48.529524   80857 cri.go:89] found id: ""
	I0717 18:42:48.529547   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.529555   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:48.529564   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:48.529577   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:48.601265   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:48.601285   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:48.601297   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:48.678045   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:48.678075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.718565   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:48.718598   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:48.769923   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:48.769956   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:48.169777   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.669643   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.670334   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.324997   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.824163   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:49.596927   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.097602   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
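	The pod_ready lines interleaved here come from three other test processes (PIDs 80180, 80401 and 81068) running in parallel; each is polling whether its metrics-server pod reports the Ready condition and prints `"Ready":"False"` every couple of seconds until it flips or the test times out. A rough equivalent of a single such probe, done by hand with kubectl (the --context value below is a placeholder; the pod name is the one printed by PID 80180 above):

	// pod_ready_probe.go - sketch only; shells out to kubectl to read the
	// Ready condition of a single pod. The --context value is a placeholder,
	// not taken from this test run.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "minikube", // placeholder context name
			"-n", "kube-system",
			"get", "pod", "metrics-server-569cc877fc-8md44",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
		).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Println("Ready:", strings.TrimSpace(string(out))) // "True" or "False"
	}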
	I0717 18:42:51.282887   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:51.295778   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:51.295848   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:51.329324   80857 cri.go:89] found id: ""
	I0717 18:42:51.329351   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.329361   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:51.329369   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:51.329434   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:51.362013   80857 cri.go:89] found id: ""
	I0717 18:42:51.362042   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.362052   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:51.362059   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:51.362120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:51.395039   80857 cri.go:89] found id: ""
	I0717 18:42:51.395069   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.395080   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:51.395087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:51.395155   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:51.427683   80857 cri.go:89] found id: ""
	I0717 18:42:51.427709   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.427717   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:51.427722   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:51.427772   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:51.461683   80857 cri.go:89] found id: ""
	I0717 18:42:51.461706   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.461718   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:51.461723   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:51.461769   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:51.495780   80857 cri.go:89] found id: ""
	I0717 18:42:51.495802   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.495810   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:51.495816   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:51.495867   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:51.527541   80857 cri.go:89] found id: ""
	I0717 18:42:51.527573   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.527583   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:51.527591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:51.527648   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:51.567947   80857 cri.go:89] found id: ""
	I0717 18:42:51.567975   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.567987   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:51.567997   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:51.568014   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:51.620083   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:51.620109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:51.632823   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:51.632848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:51.705731   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:51.705753   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:51.705767   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:51.781969   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:51.782005   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.318011   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:54.331886   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:54.331942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:54.362935   80857 cri.go:89] found id: ""
	I0717 18:42:54.362962   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.362972   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:54.362979   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:54.363032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:54.396153   80857 cri.go:89] found id: ""
	I0717 18:42:54.396180   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.396191   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:54.396198   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:54.396259   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:54.433123   80857 cri.go:89] found id: ""
	I0717 18:42:54.433150   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.433160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:54.433168   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:54.433224   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:54.465034   80857 cri.go:89] found id: ""
	I0717 18:42:54.465064   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.465079   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:54.465087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:54.465200   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:54.496200   80857 cri.go:89] found id: ""
	I0717 18:42:54.496250   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.496263   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:54.496271   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:54.496332   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:54.528618   80857 cri.go:89] found id: ""
	I0717 18:42:54.528646   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.528656   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:54.528664   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:54.528724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:54.563018   80857 cri.go:89] found id: ""
	I0717 18:42:54.563042   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.563052   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:54.563059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:54.563114   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:54.595221   80857 cri.go:89] found id: ""
	I0717 18:42:54.595256   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.595266   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:54.595275   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:54.595291   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:54.608193   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:54.608220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:54.673755   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:54.673778   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:54.673793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:54.756443   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:54.756483   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.792670   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:54.792700   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:55.169224   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.169851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.824614   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.324611   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.596824   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:56.597638   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.096992   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.344637   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:57.357003   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:57.357068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:57.389230   80857 cri.go:89] found id: ""
	I0717 18:42:57.389261   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.389271   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:57.389278   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:57.389372   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:57.421529   80857 cri.go:89] found id: ""
	I0717 18:42:57.421553   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.421571   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:57.421578   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:57.421642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:57.455154   80857 cri.go:89] found id: ""
	I0717 18:42:57.455186   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.455193   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:57.455199   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:57.455245   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:57.490576   80857 cri.go:89] found id: ""
	I0717 18:42:57.490608   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.490621   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:57.490630   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:57.490693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:57.523972   80857 cri.go:89] found id: ""
	I0717 18:42:57.524010   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.524023   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:57.524033   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:57.524092   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:57.558106   80857 cri.go:89] found id: ""
	I0717 18:42:57.558132   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.558140   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:57.558145   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:57.558201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:57.591009   80857 cri.go:89] found id: ""
	I0717 18:42:57.591035   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.591045   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:57.591051   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:57.591110   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:57.624564   80857 cri.go:89] found id: ""
	I0717 18:42:57.624592   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.624601   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:57.624612   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:57.624627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:57.699833   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:57.699868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:57.737029   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:57.737066   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:57.790562   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:57.790605   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:57.804935   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:57.804984   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:57.873081   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
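	Every describe-nodes attempt in this cycle fails the same way: kubectl is pointed at localhost:8443 (per the kubeconfig on the node) and the connection is refused, which is consistent with no kube-apiserver container existing at all. A quick manual way to confirm that nothing is listening on that port from the node itself could be a plain TCP dial (sketch; the address is the one shown in the error above):

	// port_probe.go - sketch: check whether anything is listening on the
	// apiserver's secure port (8443, as in the kubeconfig used above).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			// Matches the "connection ... was refused" seen in the kubectl output.
			fmt.Println("nothing listening on 127.0.0.1:8443:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on 127.0.0.1:8443")
	}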
	I0717 18:42:59.170203   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.170348   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.325020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.825020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.596885   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.597698   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:00.374166   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:00.388370   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:00.388443   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:00.421228   80857 cri.go:89] found id: ""
	I0717 18:43:00.421257   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.421268   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:00.421276   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:00.421325   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:00.451819   80857 cri.go:89] found id: ""
	I0717 18:43:00.451846   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.451856   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:00.451862   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:00.451917   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:00.482960   80857 cri.go:89] found id: ""
	I0717 18:43:00.482993   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.483004   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:00.483015   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:00.483074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:00.515860   80857 cri.go:89] found id: ""
	I0717 18:43:00.515882   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.515892   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:00.515899   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:00.515954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:00.548177   80857 cri.go:89] found id: ""
	I0717 18:43:00.548202   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.548212   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:00.548217   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:00.548275   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:00.580759   80857 cri.go:89] found id: ""
	I0717 18:43:00.580782   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.580790   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:00.580795   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:00.580847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:00.618661   80857 cri.go:89] found id: ""
	I0717 18:43:00.618683   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.618691   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:00.618699   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:00.618742   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:00.650503   80857 cri.go:89] found id: ""
	I0717 18:43:00.650528   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.650535   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:00.650544   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:00.650555   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:00.699668   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:00.699697   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:00.714086   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:00.714114   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:00.777051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:00.777087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:00.777105   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:00.859238   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:00.859274   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.399050   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:03.412565   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:03.412626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:03.445993   80857 cri.go:89] found id: ""
	I0717 18:43:03.446026   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.446038   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:03.446045   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:03.446101   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:03.481251   80857 cri.go:89] found id: ""
	I0717 18:43:03.481285   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.481297   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:03.481305   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:03.481371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:03.514406   80857 cri.go:89] found id: ""
	I0717 18:43:03.514433   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.514441   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:03.514447   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:03.514497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:03.546217   80857 cri.go:89] found id: ""
	I0717 18:43:03.546248   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.546258   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:03.546266   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:03.546327   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:03.577287   80857 cri.go:89] found id: ""
	I0717 18:43:03.577318   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.577333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:03.577340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:03.577394   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:03.610080   80857 cri.go:89] found id: ""
	I0717 18:43:03.610101   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.610109   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:03.610114   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:03.610159   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:03.643753   80857 cri.go:89] found id: ""
	I0717 18:43:03.643777   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.643787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:03.643792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:03.643849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:03.676290   80857 cri.go:89] found id: ""
	I0717 18:43:03.676338   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.676345   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:03.676353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:03.676364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:03.727818   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:03.727850   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:03.740752   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:03.740784   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:03.810465   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:03.810485   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:03.810499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:03.889326   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:03.889359   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.170473   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:05.170754   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:07.172145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.323855   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.325019   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.096213   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.096443   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.426949   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:06.440007   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:06.440079   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:06.471689   80857 cri.go:89] found id: ""
	I0717 18:43:06.471715   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.471724   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:06.471729   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:06.471775   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:06.503818   80857 cri.go:89] found id: ""
	I0717 18:43:06.503840   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.503847   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:06.503853   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:06.503900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:06.534733   80857 cri.go:89] found id: ""
	I0717 18:43:06.534755   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.534763   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:06.534768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:06.534818   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:06.565388   80857 cri.go:89] found id: ""
	I0717 18:43:06.565414   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.565421   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:06.565431   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:06.565480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:06.597739   80857 cri.go:89] found id: ""
	I0717 18:43:06.597764   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.597775   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:06.597782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:06.597847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:06.629823   80857 cri.go:89] found id: ""
	I0717 18:43:06.629845   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.629853   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:06.629859   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:06.629921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:06.663753   80857 cri.go:89] found id: ""
	I0717 18:43:06.663779   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.663787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:06.663792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:06.663838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:06.700868   80857 cri.go:89] found id: ""
	I0717 18:43:06.700896   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.700906   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:06.700917   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:06.700932   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:06.753064   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:06.753097   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:06.765845   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:06.765868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:06.834691   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:06.834715   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:06.834729   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:06.908650   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:06.908682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.450804   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:09.463369   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:09.463452   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:09.506992   80857 cri.go:89] found id: ""
	I0717 18:43:09.507020   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.507028   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:09.507035   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:09.507093   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:09.543083   80857 cri.go:89] found id: ""
	I0717 18:43:09.543108   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.543116   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:09.543121   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:09.543174   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:09.576194   80857 cri.go:89] found id: ""
	I0717 18:43:09.576219   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.576226   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:09.576231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:09.576289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:09.610148   80857 cri.go:89] found id: ""
	I0717 18:43:09.610171   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.610178   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:09.610184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:09.610258   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:09.642217   80857 cri.go:89] found id: ""
	I0717 18:43:09.642246   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.642255   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:09.642263   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:09.642342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:09.678041   80857 cri.go:89] found id: ""
	I0717 18:43:09.678064   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.678073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:09.678079   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:09.678141   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:09.711162   80857 cri.go:89] found id: ""
	I0717 18:43:09.711193   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.711204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:09.711212   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:09.711272   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:09.746135   80857 cri.go:89] found id: ""
	I0717 18:43:09.746164   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.746175   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:09.746186   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:09.746197   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:09.799268   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:09.799303   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:09.811910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:09.811935   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:09.876939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:09.876982   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:09.876998   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:09.951468   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:09.951502   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.671086   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.170273   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.823628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.824485   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.597216   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:13.096347   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.488926   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:12.501054   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:12.501112   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:12.532536   80857 cri.go:89] found id: ""
	I0717 18:43:12.532569   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.532577   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:12.532582   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:12.532629   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:12.565102   80857 cri.go:89] found id: ""
	I0717 18:43:12.565130   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.565141   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:12.565148   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:12.565208   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:12.600262   80857 cri.go:89] found id: ""
	I0717 18:43:12.600299   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.600309   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:12.600316   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:12.600366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:12.633950   80857 cri.go:89] found id: ""
	I0717 18:43:12.633980   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.633991   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:12.633998   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:12.634054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:12.673297   80857 cri.go:89] found id: ""
	I0717 18:43:12.673325   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.673338   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:12.673345   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:12.673406   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:12.707112   80857 cri.go:89] found id: ""
	I0717 18:43:12.707136   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.707144   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:12.707150   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:12.707206   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:12.746323   80857 cri.go:89] found id: ""
	I0717 18:43:12.746348   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.746358   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:12.746372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:12.746433   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:12.779470   80857 cri.go:89] found id: ""
	I0717 18:43:12.779496   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.779507   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:12.779518   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:12.779534   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:12.830156   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:12.830178   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:12.843707   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:12.843734   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:12.911849   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:12.911875   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:12.911891   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:12.986090   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:12.986122   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:14.170350   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:16.670284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:14.824727   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.324146   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.096736   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.596689   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.523428   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:15.536012   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:15.536070   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:15.569179   80857 cri.go:89] found id: ""
	I0717 18:43:15.569208   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.569218   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:15.569225   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:15.569273   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:15.606727   80857 cri.go:89] found id: ""
	I0717 18:43:15.606749   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.606757   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:15.606763   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:15.606805   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:15.638842   80857 cri.go:89] found id: ""
	I0717 18:43:15.638873   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.638883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:15.638889   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:15.638939   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:15.671418   80857 cri.go:89] found id: ""
	I0717 18:43:15.671444   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.671453   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:15.671459   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:15.671517   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:15.704892   80857 cri.go:89] found id: ""
	I0717 18:43:15.704928   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.704937   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:15.704956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:15.705013   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:15.738478   80857 cri.go:89] found id: ""
	I0717 18:43:15.738502   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.738509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:15.738515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:15.738584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:15.771188   80857 cri.go:89] found id: ""
	I0717 18:43:15.771225   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.771237   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:15.771245   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:15.771303   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:15.807737   80857 cri.go:89] found id: ""
	I0717 18:43:15.807763   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.807770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:15.807779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:15.807790   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:15.861202   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:15.861234   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:15.874170   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:15.874200   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:15.938049   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:15.938073   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:15.938086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:16.025420   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:16.025456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:18.563320   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:18.575574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:18.575634   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:18.608673   80857 cri.go:89] found id: ""
	I0717 18:43:18.608700   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.608710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:18.608718   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:18.608782   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:18.641589   80857 cri.go:89] found id: ""
	I0717 18:43:18.641611   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.641618   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:18.641624   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:18.641679   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:18.672232   80857 cri.go:89] found id: ""
	I0717 18:43:18.672258   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.672268   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:18.672274   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:18.672331   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:18.706088   80857 cri.go:89] found id: ""
	I0717 18:43:18.706111   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.706118   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:18.706134   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:18.706179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:18.742475   80857 cri.go:89] found id: ""
	I0717 18:43:18.742503   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.742512   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:18.742518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:18.742575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:18.774141   80857 cri.go:89] found id: ""
	I0717 18:43:18.774169   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.774178   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:18.774183   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:18.774234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:18.806648   80857 cri.go:89] found id: ""
	I0717 18:43:18.806672   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.806679   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:18.806685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:18.806731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:18.838022   80857 cri.go:89] found id: ""
	I0717 18:43:18.838047   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.838054   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:18.838062   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:18.838076   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:18.903467   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:18.903487   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:18.903498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:18.980385   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:18.980432   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:19.020884   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:19.020914   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:19.073530   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:19.073574   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:19.169841   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.172793   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:19.824764   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.826081   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:20.095275   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:22.097120   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.587870   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:21.602130   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:21.602185   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:21.635373   80857 cri.go:89] found id: ""
	I0717 18:43:21.635401   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.635411   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:21.635418   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:21.635480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:21.667175   80857 cri.go:89] found id: ""
	I0717 18:43:21.667200   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.667209   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:21.667216   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:21.667267   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:21.705876   80857 cri.go:89] found id: ""
	I0717 18:43:21.705907   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.705918   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:21.705926   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:21.705988   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:21.753302   80857 cri.go:89] found id: ""
	I0717 18:43:21.753323   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.753330   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:21.753337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:21.753388   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:21.785363   80857 cri.go:89] found id: ""
	I0717 18:43:21.785390   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.785396   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:21.785402   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:21.785448   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:21.817517   80857 cri.go:89] found id: ""
	I0717 18:43:21.817545   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.817553   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:21.817560   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:21.817615   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:21.849451   80857 cri.go:89] found id: ""
	I0717 18:43:21.849478   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.849489   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:21.849497   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:21.849553   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:21.880032   80857 cri.go:89] found id: ""
	I0717 18:43:21.880055   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.880063   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:21.880073   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:21.880086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:21.928498   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:21.928530   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:21.941532   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:21.941565   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:22.014044   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:22.014066   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:22.014081   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:22.090789   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:22.090817   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:24.628401   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:24.643571   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:24.643642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:24.679262   80857 cri.go:89] found id: ""
	I0717 18:43:24.679288   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.679297   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:24.679303   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:24.679360   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:24.713043   80857 cri.go:89] found id: ""
	I0717 18:43:24.713073   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.713085   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:24.713092   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:24.713145   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:24.751459   80857 cri.go:89] found id: ""
	I0717 18:43:24.751496   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.751508   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:24.751518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:24.751584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:24.790793   80857 cri.go:89] found id: ""
	I0717 18:43:24.790820   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.790831   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:24.790838   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:24.790895   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:24.822909   80857 cri.go:89] found id: ""
	I0717 18:43:24.822936   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.822945   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:24.822953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:24.823016   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:24.855369   80857 cri.go:89] found id: ""
	I0717 18:43:24.855418   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.855455   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:24.855468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:24.855557   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:24.891080   80857 cri.go:89] found id: ""
	I0717 18:43:24.891110   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.891127   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:24.891133   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:24.891187   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:24.923679   80857 cri.go:89] found id: ""
	I0717 18:43:24.923812   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.923833   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:24.923847   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:24.923863   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:24.975469   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:24.975499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:24.988671   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:24.988702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:43:23.670616   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.171013   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.323858   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.324395   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:28.325125   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.596495   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.597134   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:29.096334   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	W0717 18:43:25.055191   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:25.055210   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:25.055223   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:25.138867   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:25.138900   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:27.678822   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:27.691422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:27.691483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:27.723979   80857 cri.go:89] found id: ""
	I0717 18:43:27.724008   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.724016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:27.724022   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:27.724067   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:27.756389   80857 cri.go:89] found id: ""
	I0717 18:43:27.756415   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.756423   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:27.756429   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:27.756476   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:27.787617   80857 cri.go:89] found id: ""
	I0717 18:43:27.787644   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.787652   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:27.787658   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:27.787705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:27.821688   80857 cri.go:89] found id: ""
	I0717 18:43:27.821716   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.821725   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:27.821732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:27.821787   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:27.855353   80857 cri.go:89] found id: ""
	I0717 18:43:27.855378   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.855386   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:27.855392   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:27.855439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:27.887885   80857 cri.go:89] found id: ""
	I0717 18:43:27.887909   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.887917   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:27.887923   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:27.887984   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:27.918797   80857 cri.go:89] found id: ""
	I0717 18:43:27.918820   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.918828   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:27.918833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:27.918884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:27.951255   80857 cri.go:89] found id: ""
	I0717 18:43:27.951283   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.951295   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:27.951306   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:27.951319   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:28.025476   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:28.025506   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:28.063994   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:28.064020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:28.117762   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:28.117805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:28.135688   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:28.135725   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:28.238770   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:28.172438   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.670703   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:32.674896   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.824443   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.324216   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:31.595533   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.597968   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.739930   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:30.754147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:30.754231   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:30.794454   80857 cri.go:89] found id: ""
	I0717 18:43:30.794479   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.794486   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:30.794491   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:30.794548   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:30.831643   80857 cri.go:89] found id: ""
	I0717 18:43:30.831666   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.831673   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:30.831678   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:30.831731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:30.863293   80857 cri.go:89] found id: ""
	I0717 18:43:30.863315   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.863323   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:30.863337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:30.863395   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:30.897830   80857 cri.go:89] found id: ""
	I0717 18:43:30.897859   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.897870   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:30.897877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:30.897929   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:30.933179   80857 cri.go:89] found id: ""
	I0717 18:43:30.933209   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.933220   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:30.933227   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:30.933289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:30.964730   80857 cri.go:89] found id: ""
	I0717 18:43:30.964759   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.964773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:30.964781   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:30.964825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:30.996330   80857 cri.go:89] found id: ""
	I0717 18:43:30.996353   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.996361   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:30.996367   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:30.996419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:31.028193   80857 cri.go:89] found id: ""
	I0717 18:43:31.028220   80857 logs.go:276] 0 containers: []
	W0717 18:43:31.028228   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:31.028237   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:31.028251   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:31.040465   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:31.040490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:31.108127   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:31.108150   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:31.108164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:31.187763   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:31.187797   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:31.224238   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:31.224266   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:33.776145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:33.790045   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:33.790108   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:33.823471   80857 cri.go:89] found id: ""
	I0717 18:43:33.823495   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.823505   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:33.823512   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:33.823568   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:33.860205   80857 cri.go:89] found id: ""
	I0717 18:43:33.860233   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.860243   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:33.860250   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:33.860298   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:33.895469   80857 cri.go:89] found id: ""
	I0717 18:43:33.895499   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.895509   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:33.895516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:33.895578   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:33.938483   80857 cri.go:89] found id: ""
	I0717 18:43:33.938517   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.938527   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:33.938534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:33.938596   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:33.973265   80857 cri.go:89] found id: ""
	I0717 18:43:33.973293   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.973303   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:33.973309   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:33.973382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:34.012669   80857 cri.go:89] found id: ""
	I0717 18:43:34.012696   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.012704   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:34.012710   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:34.012760   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:34.045522   80857 cri.go:89] found id: ""
	I0717 18:43:34.045547   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.045557   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:34.045564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:34.045636   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:34.082927   80857 cri.go:89] found id: ""
	I0717 18:43:34.082957   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.082968   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:34.082979   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:34.082993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:34.134133   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:34.134168   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:34.146814   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:34.146837   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:34.217050   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:34.217079   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:34.217094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:34.298572   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:34.298610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:35.169868   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.170083   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:35.324578   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.825006   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:36.096437   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:38.096991   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:36.838187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:36.850888   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:36.850948   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:36.883132   80857 cri.go:89] found id: ""
	I0717 18:43:36.883153   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.883160   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:36.883166   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:36.883209   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:36.918310   80857 cri.go:89] found id: ""
	I0717 18:43:36.918339   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.918348   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:36.918353   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:36.918411   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:36.949794   80857 cri.go:89] found id: ""
	I0717 18:43:36.949818   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.949825   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:36.949831   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:36.949889   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:36.980913   80857 cri.go:89] found id: ""
	I0717 18:43:36.980951   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.980962   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:36.980969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:36.981029   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:37.014295   80857 cri.go:89] found id: ""
	I0717 18:43:37.014322   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.014330   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:37.014336   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:37.014397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:37.048555   80857 cri.go:89] found id: ""
	I0717 18:43:37.048581   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.048589   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:37.048595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:37.048643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:37.080533   80857 cri.go:89] found id: ""
	I0717 18:43:37.080561   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.080571   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:37.080577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:37.080640   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:37.112919   80857 cri.go:89] found id: ""
	I0717 18:43:37.112952   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.112963   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:37.112973   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:37.112987   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:37.165012   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:37.165044   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:37.177860   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:37.177881   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:37.244776   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:37.244806   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:37.244824   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:37.322949   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:37.322976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:39.861056   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:39.884509   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:39.884592   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:39.931317   80857 cri.go:89] found id: ""
	I0717 18:43:39.931341   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.931348   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:39.931354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:39.931410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:39.971571   80857 cri.go:89] found id: ""
	I0717 18:43:39.971615   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.971626   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:39.971634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:39.971692   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:40.003851   80857 cri.go:89] found id: ""
	I0717 18:43:40.003875   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.003883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:40.003891   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:40.003942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:40.040403   80857 cri.go:89] found id: ""
	I0717 18:43:40.040430   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.040440   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:40.040445   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:40.040498   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:39.669960   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.170056   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.325792   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.824332   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.596935   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.597153   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.071893   80857 cri.go:89] found id: ""
	I0717 18:43:40.071919   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.071927   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:40.071932   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:40.071979   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:40.111020   80857 cri.go:89] found id: ""
	I0717 18:43:40.111042   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.111052   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:40.111059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:40.111117   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:40.142872   80857 cri.go:89] found id: ""
	I0717 18:43:40.142899   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.142910   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:40.142917   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:40.142975   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:40.179919   80857 cri.go:89] found id: ""
	I0717 18:43:40.179944   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.179953   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:40.179963   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:40.179980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:40.233033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:40.233075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:40.246272   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:40.246299   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:40.311988   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:40.312014   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:40.312033   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:40.395622   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:40.395658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:42.935843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:42.949893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:42.949957   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:42.982429   80857 cri.go:89] found id: ""
	I0717 18:43:42.982451   80857 logs.go:276] 0 containers: []
	W0717 18:43:42.982459   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:42.982464   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:42.982512   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:43.018637   80857 cri.go:89] found id: ""
	I0717 18:43:43.018659   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.018666   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:43.018672   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:43.018719   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:43.054274   80857 cri.go:89] found id: ""
	I0717 18:43:43.054301   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.054310   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:43.054317   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:43.054368   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:43.093382   80857 cri.go:89] found id: ""
	I0717 18:43:43.093408   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.093418   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:43.093425   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:43.093484   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:43.125830   80857 cri.go:89] found id: ""
	I0717 18:43:43.125862   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.125871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:43.125878   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:43.125936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:43.157110   80857 cri.go:89] found id: ""
	I0717 18:43:43.157138   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.157147   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:43.157154   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:43.157215   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:43.188320   80857 cri.go:89] found id: ""
	I0717 18:43:43.188342   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.188349   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:43.188354   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:43.188400   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:43.220650   80857 cri.go:89] found id: ""
	I0717 18:43:43.220679   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.220686   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:43.220695   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:43.220707   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:43.259320   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:43.259358   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:43.308308   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:43.308346   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:43.321865   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:43.321894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:43.396110   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:43.396135   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:43.396147   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:44.670206   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.169748   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.323427   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.324066   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.096564   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.105605   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.976091   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:45.988956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:45.989015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:46.022277   80857 cri.go:89] found id: ""
	I0717 18:43:46.022307   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.022318   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:46.022325   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:46.022398   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:46.057607   80857 cri.go:89] found id: ""
	I0717 18:43:46.057636   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.057646   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:46.057653   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:46.057712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:46.089275   80857 cri.go:89] found id: ""
	I0717 18:43:46.089304   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.089313   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:46.089321   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:46.089378   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:46.123686   80857 cri.go:89] found id: ""
	I0717 18:43:46.123717   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.123726   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:46.123731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:46.123784   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:46.166600   80857 cri.go:89] found id: ""
	I0717 18:43:46.166628   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.166638   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:46.166645   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:46.166704   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:46.202518   80857 cri.go:89] found id: ""
	I0717 18:43:46.202543   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.202562   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:46.202568   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:46.202612   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:46.234573   80857 cri.go:89] found id: ""
	I0717 18:43:46.234608   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.234620   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:46.234627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:46.234687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:46.265305   80857 cri.go:89] found id: ""
	I0717 18:43:46.265333   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.265343   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:46.265355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:46.265369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:46.342963   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:46.342993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:46.377170   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:46.377208   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:46.429641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:46.429673   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:46.442168   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:46.442195   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:46.516656   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
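The 80857 process repeats the same per-component probe each cycle: list all CRI containers whose name matches a control-plane component and report when none is found. A minimal shell sketch of that probe, with the component names and crictl flags copied from the log (run inside the minikube guest; sudo and crictl are assumed to be available):

  # Probe for control-plane containers the same way the repeated cycle above does.
  # An empty result corresponds to the 'No container was found matching "<name>"' lines.
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet kubernetes-dashboard; do
    ids=$(sudo crictl ps -a --quiet --name="${name}")
    if [ -z "${ids}" ]; then
      echo "0 containers found for ${name}"
    else
      echo "found ids for ${name}: ${ids}"
    fi
  done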
	I0717 18:43:49.016877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:49.030308   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:49.030375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:49.062400   80857 cri.go:89] found id: ""
	I0717 18:43:49.062423   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.062430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:49.062435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:49.062486   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:49.097110   80857 cri.go:89] found id: ""
	I0717 18:43:49.097131   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.097137   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:49.097142   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:49.097190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:49.128535   80857 cri.go:89] found id: ""
	I0717 18:43:49.128558   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.128571   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:49.128577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:49.128626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:49.162505   80857 cri.go:89] found id: ""
	I0717 18:43:49.162530   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.162538   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:49.162544   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:49.162594   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:49.194912   80857 cri.go:89] found id: ""
	I0717 18:43:49.194939   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.194950   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:49.194957   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:49.195025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:49.227055   80857 cri.go:89] found id: ""
	I0717 18:43:49.227083   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.227092   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:49.227098   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:49.227147   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:49.259568   80857 cri.go:89] found id: ""
	I0717 18:43:49.259596   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.259607   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:49.259618   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:49.259673   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:49.291700   80857 cri.go:89] found id: ""
	I0717 18:43:49.291727   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.291735   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:49.291744   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:49.291755   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:49.344600   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:49.344636   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:49.357680   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:49.357705   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:49.427160   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.427180   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:49.427192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:49.504151   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:49.504182   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:49.170632   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.170953   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.324205   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.823181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:53.824989   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.596298   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.596383   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:54.097260   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:52.041591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:52.054775   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:52.054841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:52.085858   80857 cri.go:89] found id: ""
	I0717 18:43:52.085892   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.085904   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:52.085911   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:52.085961   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:52.124100   80857 cri.go:89] found id: ""
	I0717 18:43:52.124122   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.124130   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:52.124135   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:52.124195   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:52.155056   80857 cri.go:89] found id: ""
	I0717 18:43:52.155079   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.155087   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:52.155093   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:52.155154   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:52.189318   80857 cri.go:89] found id: ""
	I0717 18:43:52.189349   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.189359   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:52.189366   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:52.189430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:52.222960   80857 cri.go:89] found id: ""
	I0717 18:43:52.222988   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.222999   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:52.223006   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:52.223071   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:52.255807   80857 cri.go:89] found id: ""
	I0717 18:43:52.255834   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.255841   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:52.255847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:52.255904   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:52.286596   80857 cri.go:89] found id: ""
	I0717 18:43:52.286628   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.286641   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:52.286648   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:52.286703   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:52.319607   80857 cri.go:89] found id: ""
	I0717 18:43:52.319632   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.319641   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:52.319652   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:52.319666   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:52.371270   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:52.371301   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:52.384771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:52.384803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:52.456408   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:52.456432   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:52.456444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:52.533724   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:52.533759   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:53.171080   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.669642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.324311   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.823693   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.595916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.597526   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.072554   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:55.087005   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:55.087086   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:55.123300   80857 cri.go:89] found id: ""
	I0717 18:43:55.123325   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.123331   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:55.123336   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:55.123390   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:55.158476   80857 cri.go:89] found id: ""
	I0717 18:43:55.158502   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.158509   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:55.158515   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:55.158572   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:55.198489   80857 cri.go:89] found id: ""
	I0717 18:43:55.198511   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.198518   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:55.198524   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:55.198567   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:55.230901   80857 cri.go:89] found id: ""
	I0717 18:43:55.230933   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.230943   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:55.230951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:55.231028   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:55.262303   80857 cri.go:89] found id: ""
	I0717 18:43:55.262326   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.262333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:55.262340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:55.262393   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:55.293889   80857 cri.go:89] found id: ""
	I0717 18:43:55.293916   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.293925   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:55.293930   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:55.293983   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:55.325695   80857 cri.go:89] found id: ""
	I0717 18:43:55.325720   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.325727   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:55.325737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:55.325797   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:55.360021   80857 cri.go:89] found id: ""
	I0717 18:43:55.360044   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.360052   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:55.360059   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:55.360075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:55.372088   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:55.372111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:55.442073   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:55.442101   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:55.442116   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:55.521733   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:55.521763   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:55.558914   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:55.558947   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.114001   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:58.126283   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:58.126353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:58.162769   80857 cri.go:89] found id: ""
	I0717 18:43:58.162800   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.162810   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:58.162815   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:58.162862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:58.197359   80857 cri.go:89] found id: ""
	I0717 18:43:58.197386   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.197397   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:58.197404   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:58.197465   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:58.229662   80857 cri.go:89] found id: ""
	I0717 18:43:58.229691   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.229700   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:58.229707   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:58.229766   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:58.261810   80857 cri.go:89] found id: ""
	I0717 18:43:58.261832   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.261838   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:58.261844   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:58.261900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:58.293243   80857 cri.go:89] found id: ""
	I0717 18:43:58.293271   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.293282   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:58.293290   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:58.293353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:58.325689   80857 cri.go:89] found id: ""
	I0717 18:43:58.325714   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.325724   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:58.325731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:58.325785   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:58.357381   80857 cri.go:89] found id: ""
	I0717 18:43:58.357406   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.357416   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:58.357422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:58.357483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:58.389859   80857 cri.go:89] found id: ""
	I0717 18:43:58.389888   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.389900   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:58.389910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:58.389926   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:58.458034   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:58.458058   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:58.458072   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:58.536134   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:58.536164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:58.573808   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:58.573834   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.624956   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:58.624985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:58.170810   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.670184   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.671370   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.824682   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.824874   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:01.096294   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:03.096348   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:01.138486   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:01.151547   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:01.151610   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:01.186397   80857 cri.go:89] found id: ""
	I0717 18:44:01.186422   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.186430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:01.186435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:01.186487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:01.220797   80857 cri.go:89] found id: ""
	I0717 18:44:01.220822   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.220830   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:01.220849   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:01.220894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:01.257640   80857 cri.go:89] found id: ""
	I0717 18:44:01.257666   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.257674   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:01.257680   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:01.257727   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:01.295393   80857 cri.go:89] found id: ""
	I0717 18:44:01.295418   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.295425   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:01.295432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:01.295493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:01.327242   80857 cri.go:89] found id: ""
	I0717 18:44:01.327261   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.327268   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:01.327273   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:01.327319   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:01.358559   80857 cri.go:89] found id: ""
	I0717 18:44:01.358586   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.358593   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:01.358599   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:01.358647   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:01.392301   80857 cri.go:89] found id: ""
	I0717 18:44:01.392332   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.392341   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:01.392346   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:01.392407   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:01.424422   80857 cri.go:89] found id: ""
	I0717 18:44:01.424449   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.424457   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:01.424465   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:01.424477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:01.473298   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:01.473332   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:01.487444   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:01.487471   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:01.552548   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:01.552572   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:01.552586   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:01.634203   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:01.634242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:04.175618   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:04.188071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:04.188150   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:04.222149   80857 cri.go:89] found id: ""
	I0717 18:44:04.222173   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.222180   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:04.222185   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:04.222242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:04.257174   80857 cri.go:89] found id: ""
	I0717 18:44:04.257211   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.257223   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:04.257232   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:04.257284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:04.291628   80857 cri.go:89] found id: ""
	I0717 18:44:04.291653   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.291666   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:04.291673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:04.291733   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:04.325935   80857 cri.go:89] found id: ""
	I0717 18:44:04.325964   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.325975   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:04.325982   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:04.326043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:04.356610   80857 cri.go:89] found id: ""
	I0717 18:44:04.356638   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.356648   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:04.356655   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:04.356712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:04.387728   80857 cri.go:89] found id: ""
	I0717 18:44:04.387764   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.387773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:04.387782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:04.387840   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:04.421452   80857 cri.go:89] found id: ""
	I0717 18:44:04.421479   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.421488   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:04.421495   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:04.421555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:04.453111   80857 cri.go:89] found id: ""
	I0717 18:44:04.453139   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.453150   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:04.453161   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:04.453175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:04.506185   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:04.506215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:04.523611   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:04.523638   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:04.591051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:04.591074   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:04.591091   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:04.666603   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:04.666647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:05.169836   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.170112   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.324886   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.325488   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.096545   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.598131   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.205208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:07.218182   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:07.218236   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:07.254521   80857 cri.go:89] found id: ""
	I0717 18:44:07.254554   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.254565   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:07.254571   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:07.254638   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:07.293622   80857 cri.go:89] found id: ""
	I0717 18:44:07.293650   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.293658   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:07.293663   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:07.293711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:07.331056   80857 cri.go:89] found id: ""
	I0717 18:44:07.331083   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.331091   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:07.331097   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:07.331157   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:07.368445   80857 cri.go:89] found id: ""
	I0717 18:44:07.368476   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.368484   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:07.368491   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:07.368541   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:07.405507   80857 cri.go:89] found id: ""
	I0717 18:44:07.405539   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.405550   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:07.405557   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:07.405617   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:07.444752   80857 cri.go:89] found id: ""
	I0717 18:44:07.444782   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.444792   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:07.444801   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:07.444859   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:07.486976   80857 cri.go:89] found id: ""
	I0717 18:44:07.487006   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.487016   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:07.487024   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:07.487073   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:07.522561   80857 cri.go:89] found id: ""
	I0717 18:44:07.522590   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.522599   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:07.522607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:07.522618   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:07.576350   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:07.576382   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:07.591491   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:07.591517   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:07.659860   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:07.659886   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:07.659902   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:07.743445   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:07.743478   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:09.170601   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.170851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:09.824120   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.826838   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.097009   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:12.596778   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.284468   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:10.296549   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:10.296608   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:10.331209   80857 cri.go:89] found id: ""
	I0717 18:44:10.331236   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.331246   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:10.331252   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:10.331297   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:10.363911   80857 cri.go:89] found id: ""
	I0717 18:44:10.363941   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.363949   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:10.363954   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:10.364001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:10.395935   80857 cri.go:89] found id: ""
	I0717 18:44:10.395960   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.395970   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:10.395977   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:10.396021   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:10.428307   80857 cri.go:89] found id: ""
	I0717 18:44:10.428337   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.428344   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:10.428351   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:10.428397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:10.459615   80857 cri.go:89] found id: ""
	I0717 18:44:10.459643   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.459654   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:10.459661   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:10.459715   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:10.491593   80857 cri.go:89] found id: ""
	I0717 18:44:10.491617   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.491628   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:10.491636   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:10.491693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:10.526822   80857 cri.go:89] found id: ""
	I0717 18:44:10.526846   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.526853   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:10.526858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:10.526918   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:10.561037   80857 cri.go:89] found id: ""
	I0717 18:44:10.561066   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.561077   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:10.561087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:10.561101   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:10.643333   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:10.643364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:10.684673   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:10.684704   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:10.736191   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:10.736220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:10.748762   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:10.748793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:10.812121   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
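Each "Gathering logs" pass in the cycles above reduces to a handful of commands run inside the guest. A hedged sketch for reproducing them by hand, with every command copied from the log (the kubectl path is the v1.20.0 binary minikube installs in the guest):

  sudo journalctl -u kubelet -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo journalctl -u crio -n 400
  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
  # The last command keeps failing with "The connection to the server localhost:8443 was refused"
  # because no kube-apiserver container is running, so nothing is listening on port 8443.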
	I0717 18:44:13.313033   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:13.325692   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:13.325756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:13.358306   80857 cri.go:89] found id: ""
	I0717 18:44:13.358336   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.358345   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:13.358352   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:13.358410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:13.393233   80857 cri.go:89] found id: ""
	I0717 18:44:13.393264   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.393274   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:13.393282   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:13.393340   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:13.424256   80857 cri.go:89] found id: ""
	I0717 18:44:13.424287   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.424298   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:13.424305   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:13.424358   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:13.454988   80857 cri.go:89] found id: ""
	I0717 18:44:13.455010   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.455018   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:13.455023   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:13.455069   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:13.491019   80857 cri.go:89] found id: ""
	I0717 18:44:13.491046   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.491054   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:13.491060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:13.491107   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:13.523045   80857 cri.go:89] found id: ""
	I0717 18:44:13.523070   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.523079   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:13.523085   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:13.523131   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:13.555442   80857 cri.go:89] found id: ""
	I0717 18:44:13.555470   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.555483   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:13.555489   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:13.555549   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:13.588891   80857 cri.go:89] found id: ""
	I0717 18:44:13.588921   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.588931   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:13.588958   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:13.588973   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:13.663635   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.663659   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:13.663674   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:13.749098   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:13.749135   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:13.785489   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:13.785524   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:13.837098   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:13.837128   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:13.671215   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.671282   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.671466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:14.324573   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.826063   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.095967   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.096403   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.096478   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.350571   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:16.364398   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:16.364470   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:16.400677   80857 cri.go:89] found id: ""
	I0717 18:44:16.400708   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.400719   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:16.400726   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:16.400781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:16.431715   80857 cri.go:89] found id: ""
	I0717 18:44:16.431743   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.431754   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:16.431760   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:16.431836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:16.465115   80857 cri.go:89] found id: ""
	I0717 18:44:16.465148   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.465160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:16.465167   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:16.465230   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:16.497906   80857 cri.go:89] found id: ""
	I0717 18:44:16.497933   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.497944   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:16.497952   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:16.498008   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:16.534066   80857 cri.go:89] found id: ""
	I0717 18:44:16.534097   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.534108   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:16.534116   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:16.534173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:16.566679   80857 cri.go:89] found id: ""
	I0717 18:44:16.566706   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.566717   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:16.566724   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:16.566781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:16.598397   80857 cri.go:89] found id: ""
	I0717 18:44:16.598416   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.598422   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:16.598427   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:16.598480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:16.629943   80857 cri.go:89] found id: ""
	I0717 18:44:16.629975   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.629998   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:16.630017   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:16.630032   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:16.706452   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:16.706489   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:16.744971   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:16.745003   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:16.796450   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:16.796477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:16.809192   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:16.809217   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:16.875699   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.376821   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:19.389921   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:19.389980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:19.423837   80857 cri.go:89] found id: ""
	I0717 18:44:19.423862   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.423870   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:19.423877   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:19.423934   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:19.468267   80857 cri.go:89] found id: ""
	I0717 18:44:19.468293   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.468305   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:19.468311   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:19.468371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:19.503286   80857 cri.go:89] found id: ""
	I0717 18:44:19.503315   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.503326   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:19.503333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:19.503391   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:19.535505   80857 cri.go:89] found id: ""
	I0717 18:44:19.535531   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.535542   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:19.535548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:19.535607   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:19.568678   80857 cri.go:89] found id: ""
	I0717 18:44:19.568704   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.568711   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:19.568717   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:19.568762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:19.604027   80857 cri.go:89] found id: ""
	I0717 18:44:19.604053   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.604064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:19.604071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:19.604127   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:19.637357   80857 cri.go:89] found id: ""
	I0717 18:44:19.637387   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.637397   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:19.637403   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:19.637450   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:19.669094   80857 cri.go:89] found id: ""
	I0717 18:44:19.669126   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.669136   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:19.669145   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:19.669160   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:19.720218   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:19.720248   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:19.733320   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:19.733343   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:19.796229   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.796252   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:19.796267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:19.871157   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:19.871186   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:20.170824   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.670239   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.324037   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.324408   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.824030   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.098734   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.595859   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.409012   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:22.421477   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:22.421546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:22.457314   80857 cri.go:89] found id: ""
	I0717 18:44:22.457337   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.457346   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:22.457354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:22.457410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:22.490998   80857 cri.go:89] found id: ""
	I0717 18:44:22.491022   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.491030   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:22.491037   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:22.491090   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:22.523904   80857 cri.go:89] found id: ""
	I0717 18:44:22.523934   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.523945   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:22.523953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:22.524012   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:22.555917   80857 cri.go:89] found id: ""
	I0717 18:44:22.555947   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.555956   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:22.555962   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:22.556026   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:22.588510   80857 cri.go:89] found id: ""
	I0717 18:44:22.588552   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.588565   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:22.588574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:22.588652   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:22.621854   80857 cri.go:89] found id: ""
	I0717 18:44:22.621883   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.621893   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:22.621901   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:22.621956   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:22.653897   80857 cri.go:89] found id: ""
	I0717 18:44:22.653921   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.653931   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:22.653938   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:22.654001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:22.685731   80857 cri.go:89] found id: ""
	I0717 18:44:22.685760   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.685770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:22.685779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:22.685792   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:22.735514   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:22.735545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:22.748148   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:22.748169   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:22.809637   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:22.809666   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:22.809682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:22.886014   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:22.886050   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:24.670825   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:27.169930   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.824694   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.324620   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.597423   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.095788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.431906   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:25.444866   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:25.444965   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:25.477211   80857 cri.go:89] found id: ""
	I0717 18:44:25.477245   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.477257   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:25.477264   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:25.477366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:25.512077   80857 cri.go:89] found id: ""
	I0717 18:44:25.512108   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.512120   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:25.512127   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:25.512177   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:25.543953   80857 cri.go:89] found id: ""
	I0717 18:44:25.543974   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.543981   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:25.543987   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:25.544032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:25.574955   80857 cri.go:89] found id: ""
	I0717 18:44:25.574980   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.574990   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:25.574997   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:25.575054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:25.607078   80857 cri.go:89] found id: ""
	I0717 18:44:25.607106   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.607117   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:25.607125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:25.607188   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:25.643129   80857 cri.go:89] found id: ""
	I0717 18:44:25.643152   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.643162   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:25.643169   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:25.643225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:25.678220   80857 cri.go:89] found id: ""
	I0717 18:44:25.678241   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.678249   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:25.678254   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:25.678309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:25.715405   80857 cri.go:89] found id: ""
	I0717 18:44:25.715433   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.715446   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:25.715458   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:25.715474   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:25.772978   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:25.773008   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:25.786559   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:25.786587   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:25.853369   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:25.853386   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:25.853398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:25.954346   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:25.954398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:28.498591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:28.511701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:28.511762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:28.543527   80857 cri.go:89] found id: ""
	I0717 18:44:28.543551   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.543559   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:28.543565   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:28.543624   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:28.574737   80857 cri.go:89] found id: ""
	I0717 18:44:28.574762   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.574769   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:28.574776   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:28.574835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:28.608129   80857 cri.go:89] found id: ""
	I0717 18:44:28.608166   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.608174   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:28.608179   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:28.608234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:28.644324   80857 cri.go:89] found id: ""
	I0717 18:44:28.644348   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.644357   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:28.644371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:28.644426   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:28.675830   80857 cri.go:89] found id: ""
	I0717 18:44:28.675859   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.675870   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:28.675877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:28.675937   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:28.705713   80857 cri.go:89] found id: ""
	I0717 18:44:28.705749   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.705760   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:28.705768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:28.705821   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:28.738648   80857 cri.go:89] found id: ""
	I0717 18:44:28.738677   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.738688   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:28.738695   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:28.738752   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:28.768877   80857 cri.go:89] found id: ""
	I0717 18:44:28.768906   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.768916   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:28.768927   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:28.768953   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:28.818951   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:28.818985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:28.832813   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:28.832843   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:28.910030   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:28.910051   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:28.910063   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:28.986706   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:28.986743   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:29.170559   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.669543   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.824906   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:33.324261   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.096916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:32.597522   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.529154   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:31.543261   80857 kubeadm.go:597] duration metric: took 4m4.346231712s to restartPrimaryControlPlane
	W0717 18:44:31.543327   80857 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:31.543350   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:44:33.670602   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.169669   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.325082   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.824371   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.096445   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.097375   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:39.098005   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.752008   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.208633612s)
	I0717 18:44:36.752076   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:44:36.765411   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:44:36.774556   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:44:36.783406   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:44:36.783427   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:44:36.783479   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:44:36.791953   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:44:36.792007   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:44:36.800929   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:44:36.808988   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:44:36.809049   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:44:36.817312   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.825586   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:44:36.825648   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.834783   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:44:36.843109   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:44:36.843166   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:44:36.852276   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:44:37.058251   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:44:38.170695   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.671193   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.324181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.818959   80401 pod_ready.go:81] duration metric: took 4m0.000961975s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	E0717 18:44:40.818998   80401 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:44:40.819017   80401 pod_ready.go:38] duration metric: took 4m12.045669741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:44:40.819042   80401 kubeadm.go:597] duration metric: took 4m22.276381575s to restartPrimaryControlPlane
	W0717 18:44:40.819091   80401 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:40.819116   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:44:41.597013   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:44.097096   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:43.170145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:45.670626   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:46.595570   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.598459   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.169822   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:50.170686   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:52.670255   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:51.097591   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:53.597467   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:55.170853   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:57.670157   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:56.096506   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:58.107493   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.170210   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.672286   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.596747   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.590517   81068 pod_ready.go:81] duration metric: took 4m0.000120095s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:02.590549   81068 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:02.590572   81068 pod_ready.go:38] duration metric: took 4m10.536894511s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:02.590607   81068 kubeadm.go:597] duration metric: took 4m18.045314131s to restartPrimaryControlPlane
	W0717 18:45:02.590672   81068 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:02.590702   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:06.920900   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.10175503s)
	I0717 18:45:06.921009   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:06.952090   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:06.962820   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:06.979545   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:06.979577   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:06.979641   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:06.990493   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:06.990574   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:07.014934   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:07.024381   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:07.024449   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:07.033573   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.042495   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:07.042552   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.051233   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:07.059616   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:07.059674   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:07.068348   80401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:07.112042   80401 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 18:45:07.112188   80401 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:07.229262   80401 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:07.229356   80401 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:07.229491   80401 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 18:45:07.239251   80401 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:05.171753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.669753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.241949   80401 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:07.242054   80401 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:07.242150   80401 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:07.242253   80401 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:07.242355   80401 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:07.242459   80401 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:07.242536   80401 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:07.242620   80401 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:07.242721   80401 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:07.242835   80401 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:07.242937   80401 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:07.242998   80401 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:07.243068   80401 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:07.641462   80401 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:07.705768   80401 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:07.821102   80401 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:07.898702   80401 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:08.107470   80401 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:08.107945   80401 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:08.111615   80401 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:08.113464   80401 out.go:204]   - Booting up control plane ...
	I0717 18:45:08.113572   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:08.113695   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:08.113843   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:08.131411   80401 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:08.137563   80401 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:08.137622   80401 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:08.268403   80401 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:08.268519   80401 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:08.769158   80401 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.386396ms
	I0717 18:45:08.769265   80401 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:45:09.669968   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:11.670466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:13.771873   80401 kubeadm.go:310] [api-check] The API server is healthy after 5.002458706s
	I0717 18:45:13.789581   80401 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:13.804268   80401 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:13.831438   80401 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:13.831641   80401 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-066175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:13.845165   80401 kubeadm.go:310] [bootstrap-token] Using token: fscs12.0o2n9pl0vxdw75m1
	I0717 18:45:13.846851   80401 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:13.847002   80401 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:13.854788   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:13.866828   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:13.871541   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:13.875508   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:13.880068   80401 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:14.179824   80401 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:14.669946   80401 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:15.180053   80401 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:15.180076   80401 kubeadm.go:310] 
	I0717 18:45:15.180180   80401 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:15.180201   80401 kubeadm.go:310] 
	I0717 18:45:15.180287   80401 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:15.180295   80401 kubeadm.go:310] 
	I0717 18:45:15.180348   80401 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:15.180437   80401 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:15.180517   80401 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:15.180530   80401 kubeadm.go:310] 
	I0717 18:45:15.180607   80401 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:15.180617   80401 kubeadm.go:310] 
	I0717 18:45:15.180682   80401 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:15.180692   80401 kubeadm.go:310] 
	I0717 18:45:15.180775   80401 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:15.180871   80401 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:15.180984   80401 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:15.180996   80401 kubeadm.go:310] 
	I0717 18:45:15.181107   80401 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:15.181221   80401 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:15.181234   80401 kubeadm.go:310] 
	I0717 18:45:15.181370   80401 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181523   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:15.181571   80401 kubeadm.go:310] 	--control-plane 
	I0717 18:45:15.181579   80401 kubeadm.go:310] 
	I0717 18:45:15.181679   80401 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:15.181690   80401 kubeadm.go:310] 
	I0717 18:45:15.181802   80401 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181954   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:15.182460   80401 kubeadm.go:310] W0717 18:45:07.084606    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.182848   80401 kubeadm.go:310] W0717 18:45:07.085710    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.183017   80401 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:15.183038   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:45:15.183048   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:15.185022   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:13.671267   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.671682   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.186444   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:15.197514   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:45:15.216000   80401 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:15.216097   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.216157   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-066175 minikube.k8s.io/updated_at=2024_07_17T18_45_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=no-preload-066175 minikube.k8s.io/primary=true
	I0717 18:45:15.251049   80401 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:15.383234   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.884265   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.384075   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.883375   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.383864   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.884072   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.383283   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.883644   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.384366   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.507413   80401 kubeadm.go:1113] duration metric: took 4.291369352s to wait for elevateKubeSystemPrivileges
	I0717 18:45:19.507450   80401 kubeadm.go:394] duration metric: took 5m1.019320853s to StartCluster
	I0717 18:45:19.507473   80401 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.507570   80401 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:19.510004   80401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.510329   80401 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:19.510401   80401 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:19.510484   80401 addons.go:69] Setting storage-provisioner=true in profile "no-preload-066175"
	I0717 18:45:19.510515   80401 addons.go:234] Setting addon storage-provisioner=true in "no-preload-066175"
	W0717 18:45:19.510523   80401 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:19.510530   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:45:19.510531   80401 addons.go:69] Setting default-storageclass=true in profile "no-preload-066175"
	I0717 18:45:19.510553   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510551   80401 addons.go:69] Setting metrics-server=true in profile "no-preload-066175"
	I0717 18:45:19.510572   80401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-066175"
	I0717 18:45:19.510586   80401 addons.go:234] Setting addon metrics-server=true in "no-preload-066175"
	W0717 18:45:19.510596   80401 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:19.510628   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511027   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511047   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511075   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511102   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.512057   80401 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:19.513662   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:19.532038   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40719
	I0717 18:45:19.532059   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0717 18:45:19.532048   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0717 18:45:19.532557   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532562   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532701   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.533086   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533107   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533246   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533261   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533276   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533295   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533455   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533671   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533732   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533851   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.533933   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.533958   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.534280   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.534310   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.537749   80401 addons.go:234] Setting addon default-storageclass=true in "no-preload-066175"
	W0717 18:45:19.537773   80401 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:19.537804   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.538168   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.538206   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.550488   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I0717 18:45:19.551013   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.551625   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.551647   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.552005   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.552335   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.553613   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0717 18:45:19.553633   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0717 18:45:19.554184   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554243   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554271   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.554784   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554801   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.554965   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554986   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.555220   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555350   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555393   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.555995   80401 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:19.556103   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.556229   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.556825   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.557482   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:19.557499   80401 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:19.557517   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.558437   80401 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:19.560069   80401 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.560084   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:19.560100   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.560881   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.560908   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.560932   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.561265   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.561477   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.561633   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.561732   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.563601   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564025   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.564197   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.564219   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564378   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.564549   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.564686   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.579324   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37271
	I0717 18:45:19.579786   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.580331   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.580354   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.580697   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.580925   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.582700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.582910   80401 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.582923   80401 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:19.582936   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.585938   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586387   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.586414   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586605   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.586758   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.586920   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.587061   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.706369   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:19.727936   80401 node_ready.go:35] waiting up to 6m0s for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738822   80401 node_ready.go:49] node "no-preload-066175" has status "Ready":"True"
	I0717 18:45:19.738841   80401 node_ready.go:38] duration metric: took 10.872501ms for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738852   80401 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:19.744979   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:19.854180   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.873723   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:19.873746   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:19.883867   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.902041   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:19.902064   80401 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:19.926788   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:19.926867   80401 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:19.953788   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:20.571091   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571119   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571119   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571137   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571394   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.571439   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.571456   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571463   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571459   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572575   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571494   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572789   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572761   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572804   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572815   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572824   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.573027   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.573044   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589595   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.589614   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.589913   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.589940   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589918   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.789754   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.789776   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790082   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790103   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790113   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.790123   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790416   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790457   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790470   80401 addons.go:475] Verifying addon metrics-server=true in "no-preload-066175"
	I0717 18:45:20.790416   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.792175   80401 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:45:18.169876   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:20.170261   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:22.664656   80180 pod_ready.go:81] duration metric: took 4m0.000669682s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:22.664696   80180 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:22.664716   80180 pod_ready.go:38] duration metric: took 4m9.027997903s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:22.664746   80180 kubeadm.go:597] duration metric: took 4m19.955287366s to restartPrimaryControlPlane
	W0717 18:45:22.664823   80180 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:22.664854   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:20.793543   80401 addons.go:510] duration metric: took 1.283145408s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
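	The three addons enabled here (storage-provisioner, default-storageclass, metrics-server) can also be toggled from the minikube CLI; roughly equivalent, assuming the test's binary path and profile name:

	    # storage-provisioner and default-storageclass are on by default; metrics-server is opt-in
	    out/minikube-linux-amd64 -p no-preload-066175 addons enable metrics-server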
	I0717 18:45:21.766367   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.252243   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.771415   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:24.771443   80401 pod_ready.go:81] duration metric: took 5.026437249s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:24.771457   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:26.777371   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:28.778629   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.277550   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.792126   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.792154   80401 pod_ready.go:81] duration metric: took 7.020687724s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.792168   80401 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798687   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.798708   80401 pod_ready.go:81] duration metric: took 6.534344ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798717   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803428   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.803452   80401 pod_ready.go:81] duration metric: took 4.727536ms for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803464   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815053   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.815078   80401 pod_ready.go:81] duration metric: took 11.60679ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815092   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824126   80401 pod_ready.go:92] pod "kube-proxy-rgp5c" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.824151   80401 pod_ready.go:81] duration metric: took 9.050394ms for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824163   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176378   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:32.176404   80401 pod_ready.go:81] duration metric: took 352.232802ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176414   80401 pod_ready.go:38] duration metric: took 12.437548785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
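	The pod_ready lines above poll each system-critical pod for the Ready condition. Outside the test harness a comparable check can be done with kubectl wait, for example (label selector and timeout chosen here for illustration):

	    # block until the CoreDNS pods report Ready, up to 6 minutes
	    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m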
	I0717 18:45:32.176430   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:32.176492   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:32.190918   80401 api_server.go:72] duration metric: took 12.680546008s to wait for apiserver process to appear ...
	I0717 18:45:32.190942   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:32.190963   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:45:32.196011   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:45:32.197004   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:45:32.197024   80401 api_server.go:131] duration metric: took 6.075734ms to wait for apiserver health ...
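	The healthz probe above expects an HTTP 200 with body "ok" from the API server. A hand-run equivalent against the same endpoint (using -k to skip TLS verification; /healthz is readable anonymously under the default RBAC, but if anonymous auth were disabled this would need the profile's client certificates):

	    curl -k https://192.168.72.216:8443/healthz
	    # expected output: ok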
	I0717 18:45:32.197033   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:32.379383   80401 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:32.379412   80401 system_pods.go:61] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.379416   80401 system_pods.go:61] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.379420   80401 system_pods.go:61] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.379423   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.379427   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.379431   80401 system_pods.go:61] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.379433   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.379439   80401 system_pods.go:61] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.379442   80401 system_pods.go:61] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.379450   80401 system_pods.go:74] duration metric: took 182.412193ms to wait for pod list to return data ...
	I0717 18:45:32.379456   80401 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:32.576324   80401 default_sa.go:45] found service account: "default"
	I0717 18:45:32.576348   80401 default_sa.go:55] duration metric: took 196.886306ms for default service account to be created ...
	I0717 18:45:32.576357   80401 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:32.780237   80401 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:32.780266   80401 system_pods.go:89] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.780272   80401 system_pods.go:89] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.780276   80401 system_pods.go:89] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.780280   80401 system_pods.go:89] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.780284   80401 system_pods.go:89] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.780288   80401 system_pods.go:89] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.780291   80401 system_pods.go:89] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.780298   80401 system_pods.go:89] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.780302   80401 system_pods.go:89] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.780314   80401 system_pods.go:126] duration metric: took 203.948509ms to wait for k8s-apps to be running ...
	I0717 18:45:32.780323   80401 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:32.780368   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:32.796763   80401 system_svc.go:56] duration metric: took 16.430293ms WaitForService to wait for kubelet
	I0717 18:45:32.796791   80401 kubeadm.go:582] duration metric: took 13.286425468s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:32.796809   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:32.977271   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:32.977295   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:32.977305   80401 node_conditions.go:105] duration metric: took 180.491938ms to run NodePressure ...
	I0717 18:45:32.977315   80401 start.go:241] waiting for startup goroutines ...
	I0717 18:45:32.977322   80401 start.go:246] waiting for cluster config update ...
	I0717 18:45:32.977331   80401 start.go:255] writing updated cluster config ...
	I0717 18:45:32.977544   80401 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:33.022678   80401 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 18:45:33.024737   80401 out.go:177] * Done! kubectl is now configured to use "no-preload-066175" cluster and "default" namespace by default
	I0717 18:45:33.625503   81068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.034773328s)
	I0717 18:45:33.625584   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:33.640151   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:33.650198   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:33.659027   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:33.659048   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:33.659088   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:45:33.667607   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:33.667663   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:33.677632   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:45:33.685631   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:33.685683   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:33.694068   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.702840   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:33.702894   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.711560   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:45:33.719883   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:33.719928   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
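	The kubeadm.go:163 checks above follow one pattern per kubeconfig file: grep it for the expected control-plane endpoint and delete it when the endpoint is absent (here the files simply do not exist, so each rm is a no-op and kubeadm regenerates them). A condensed sketch of that loop:

	    # drop any /etc/kubernetes/*.conf that does not point at the expected endpoint
	    for f in admin kubelet controller-manager scheduler; do
	      if ! sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/${f}.conf"; then
	        sudo rm -f "/etc/kubernetes/${f}.conf"
	      fi
	    done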
	I0717 18:45:33.729898   81068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:33.781672   81068 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:45:33.781776   81068 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:33.908046   81068 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:33.908199   81068 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:33.908366   81068 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:45:34.103926   81068 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:34.105872   81068 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:34.105979   81068 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:34.106063   81068 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:34.106183   81068 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:34.106425   81068 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:34.106542   81068 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:34.106624   81068 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:34.106729   81068 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:34.106827   81068 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:34.106901   81068 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:34.106984   81068 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:34.107046   81068 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:34.107142   81068 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:34.390326   81068 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:34.442610   81068 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:34.692719   81068 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:34.777644   81068 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:35.101349   81068 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:35.102039   81068 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:35.104892   81068 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:35.106561   81068 out.go:204]   - Booting up control plane ...
	I0717 18:45:35.106689   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:35.106775   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:35.107611   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:35.126132   81068 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:35.127180   81068 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:35.127245   81068 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:35.250173   81068 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:35.250284   81068 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:35.752731   81068 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.583425ms
	I0717 18:45:35.752861   81068 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:45:40.754304   81068 kubeadm.go:310] [api-check] The API server is healthy after 5.001385597s
	I0717 18:45:40.766072   81068 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:40.785708   81068 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:40.816360   81068 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:40.816576   81068 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-022930 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:40.830588   81068 kubeadm.go:310] [bootstrap-token] Using token: kxmxsp.4wnt2q9oqhdfdirj
	I0717 18:45:40.831905   81068 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:40.832031   81068 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:40.840754   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:40.850104   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:40.853748   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:40.857341   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:40.860783   81068 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:41.161978   81068 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:41.600410   81068 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:42.161763   81068 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:42.163450   81068 kubeadm.go:310] 
	I0717 18:45:42.163541   81068 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:42.163558   81068 kubeadm.go:310] 
	I0717 18:45:42.163661   81068 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:42.163673   81068 kubeadm.go:310] 
	I0717 18:45:42.163707   81068 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:42.163797   81068 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:42.163870   81068 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:42.163881   81068 kubeadm.go:310] 
	I0717 18:45:42.163974   81068 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:42.163990   81068 kubeadm.go:310] 
	I0717 18:45:42.164058   81068 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:42.164077   81068 kubeadm.go:310] 
	I0717 18:45:42.164151   81068 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:42.164256   81068 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:42.164367   81068 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:42.164377   81068 kubeadm.go:310] 
	I0717 18:45:42.164489   81068 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:42.164588   81068 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:42.164595   81068 kubeadm.go:310] 
	I0717 18:45:42.164683   81068 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.164826   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:42.164862   81068 kubeadm.go:310] 	--control-plane 
	I0717 18:45:42.164870   81068 kubeadm.go:310] 
	I0717 18:45:42.165002   81068 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:42.165012   81068 kubeadm.go:310] 
	I0717 18:45:42.165143   81068 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.165257   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:42.166381   81068 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:42.166436   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:45:42.166456   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:42.168387   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:42.169678   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:42.180065   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
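	The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. Its exact contents are not shown in the log; as an illustration only, a typical bridge + host-local conflist looks like the sketch below (the subnet and plugin options are assumptions, not necessarily what the test wrote):

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF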
	I0717 18:45:42.197116   81068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:42.197192   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.197217   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-022930 minikube.k8s.io/updated_at=2024_07_17T18_45_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=default-k8s-diff-port-022930 minikube.k8s.io/primary=true
	I0717 18:45:42.216456   81068 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:42.370148   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.870732   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.370980   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.871201   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.370616   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.370377   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.870614   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.370555   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.870513   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.370594   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.870651   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.370620   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.870863   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.371058   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.870188   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.370949   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.871187   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.370764   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.370298   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.870917   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.371193   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.870491   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.370274   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.871160   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.370879   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.870592   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.948131   81068 kubeadm.go:1113] duration metric: took 13.751000929s to wait for elevateKubeSystemPrivileges
	I0717 18:45:55.948166   81068 kubeadm.go:394] duration metric: took 5m11.453950834s to StartCluster
	I0717 18:45:55.948188   81068 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.948265   81068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:55.950777   81068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.951066   81068 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:55.951134   81068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:55.951202   81068 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951237   81068 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951247   81068 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:55.951243   81068 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951257   81068 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951293   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:45:55.951300   81068 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951318   81068 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:55.951319   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951348   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951292   81068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-022930"
	I0717 18:45:55.951712   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951732   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951769   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951747   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.952885   81068 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:55.954423   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:55.968158   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0717 18:45:55.968547   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
	I0717 18:45:55.968768   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.968917   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.969414   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969436   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969548   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969566   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969814   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970012   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970235   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.970413   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.970462   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.970809   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44281
	I0717 18:45:55.971165   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.974130   81068 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.974155   81068 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:55.974184   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.974549   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.974578   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.981608   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.981640   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.982054   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.982711   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.982754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.990665   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0717 18:45:55.991297   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.991922   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.991938   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.992213   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.992346   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.993952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:55.996135   81068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:55.997555   81068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:55.997579   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:55.997602   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:55.998414   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0717 18:45:55.998963   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.999540   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.999554   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.000799   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0717 18:45:56.001014   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001096   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.001419   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.001512   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.001527   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001755   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.001929   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.002102   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.002141   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:56.002178   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:56.002255   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.002686   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.002709   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.003047   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.003251   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.004660   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.006355   81068 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:56.007646   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:56.007663   81068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:56.007678   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.010711   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.011220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011452   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.011637   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.011806   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.011932   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.021277   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0717 18:45:56.021980   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.022568   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.022585   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.022949   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.023127   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.025023   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.025443   81068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.025458   81068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:56.025476   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.028095   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.028477   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028666   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.028853   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.029081   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.029226   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
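The three `sshutil.go:53] new ssh client` lines above each open an SSH session to the VM with the machine's generated private key, over which the addon manifests are then copied and applied. A minimal sketch of that pattern using `golang.org/x/crypto/ssh`; the address, user and key path are copied from the log, but the helper itself is illustrative and is not minikube's actual `sshutil` implementation.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH opens a key-authenticated session and runs one command, roughly
// what each "new ssh client" line above sets up for the scp/apply steps.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Values mirror the log: IP 192.168.50.245, port 22, user docker, machine key.
	out, err := runOverSSH("192.168.50.245:22", "docker",
		"/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa",
		"sudo systemctl start kubelet")
	fmt.Println(out, err)
}
```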
	I0717 18:45:56.173482   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:56.194585   81068 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203594   81068 node_ready.go:49] node "default-k8s-diff-port-022930" has status "Ready":"True"
	I0717 18:45:56.203614   81068 node_ready.go:38] duration metric: took 8.994875ms for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203623   81068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:56.207834   81068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212424   81068 pod_ready.go:92] pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.212444   81068 pod_ready.go:81] duration metric: took 4.58857ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212454   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217013   81068 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.217031   81068 pod_ready.go:81] duration metric: took 4.569971ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217040   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221441   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.221458   81068 pod_ready.go:81] duration metric: took 4.411121ms for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221470   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.268740   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:56.268765   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:56.290194   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.310957   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:56.310981   81068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:56.352789   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:56.352821   81068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:56.378402   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:56.379632   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
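Once the manifests have been staged under /etc/kubernetes/addons, they are applied with the kubelet-bundled kubectl against the in-VM kubeconfig, exactly as the two `Run:` lines above show (one `-f` flag per file). A small sketch of building that command string; the paths are copied from the log, while `run` stands in for any remote-exec helper such as the SSH sketch earlier.

```go
package addons

import "strings"

// applyAddons mirrors the logged command: the bundled kubectl is invoked inside
// the VM with the cluster's own kubeconfig, one -f flag per staged manifest.
func applyAddons(run func(cmd string) (string, error), manifests ...string) (string, error) {
	cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.30.2/kubectl apply -f " +
		strings.Join(manifests, " -f ")
	return run(cmd)
}

// Usage, with the metrics-server manifests from the log:
//   applyAddons(sshRun,
//       "/etc/kubernetes/addons/metrics-apiservice.yaml",
//       "/etc/kubernetes/addons/metrics-server-deployment.yaml",
//       "/etc/kubernetes/addons/metrics-server-rbac.yaml",
//       "/etc/kubernetes/addons/metrics-server-service.yaml")
```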
	I0717 18:45:56.518737   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.518766   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519075   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519097   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.519108   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.519117   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519352   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519383   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519426   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.529290   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.529317   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.529618   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.529680   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.529697   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386401   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007961919s)
	I0717 18:45:57.386463   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.386480   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386925   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.386980   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386999   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.387017   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386958   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.387283   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.387304   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731240   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351571451s)
	I0717 18:45:57.731287   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731616   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.731650   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731664   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731672   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731685   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731905   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731930   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731949   81068 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-022930"
	I0717 18:45:57.731960   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.734601   81068 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 18:45:53.693038   80180 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.028164403s)
	I0717 18:45:53.693099   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:53.709020   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:53.718790   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:53.728384   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:53.728405   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:53.728444   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:53.737315   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:53.737384   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:53.746336   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:53.754297   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:53.754347   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:53.763252   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.772186   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:53.772229   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.780829   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:53.788899   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:53.788955   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
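The cleanup sequence above is purely mechanical: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it greps for the expected control-plane endpoint and removes the file when the endpoint is missing (here the greps fail because the files do not exist at all, so the rm calls are no-ops). A local-filesystem sketch of the same check, assuming the files are readable directly rather than over SSH as minikube does it.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any of the given kubeconfig files that do not
// reference the expected control-plane endpoint, mirroring the grep/rm pairs
// in the log. Missing files are simply skipped.
func pruneStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // e.g. "No such file or directory", as in the log
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale %s\n", f)
			os.Remove(f)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```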
	I0717 18:45:53.797324   80180 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:53.982580   80180 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:57.735769   81068 addons.go:510] duration metric: took 1.784634456s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 18:45:57.742312   81068 pod_ready.go:92] pod "kube-proxy-hnb5v" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.742333   81068 pod_ready.go:81] duration metric: took 1.520854667s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.742344   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809858   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.809885   81068 pod_ready.go:81] duration metric: took 67.527182ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809896   81068 pod_ready.go:38] duration metric: took 1.606263576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
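The readiness gate logged above runs in two stages: the node must report Ready, then each system-critical pod group (CoreDNS, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) is waited on individually. The same check can be reproduced with a plain `kubectl wait`; a sketch driving it from Go, with the label selectors taken from the list in the log and the invocation itself being just one way to express the check, not minikube's internal `pod_ready` polling.

```go
package main

import (
	"fmt"
	"os/exec"
)

// waitSystemPods waits for the same pod groups the log enumerates, using
// kubectl wait instead of minikube's internal readiness polling.
func waitSystemPods(context string) error {
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		cmd := exec.Command("kubectl", "--context", context,
			"-n", "kube-system", "wait", "--for=condition=Ready",
			"pod", "-l", sel, "--timeout=6m")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("%s: %v\n%s", sel, err, out)
		}
	}
	return nil
}

func main() {
	if err := waitSystemPods("default-k8s-diff-port-022930"); err != nil {
		fmt.Println(err)
	}
}
```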
	I0717 18:45:57.809914   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:57.809972   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:57.847337   81068 api_server.go:72] duration metric: took 1.896234247s to wait for apiserver process to appear ...
	I0717 18:45:57.847366   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:57.847391   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:45:57.853537   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:45:57.856587   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:45:57.856661   81068 api_server.go:131] duration metric: took 9.286402ms to wait for apiserver health ...
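After confirming the kube-apiserver process exists, health is verified by hitting /healthz on the API server (https://192.168.50.245:8444 here, because this profile deliberately uses the non-default port 8444) and expecting a 200 "ok". A bare-bones Go poller for the same endpoint; skipping certificate verification is for brevity only, the real check trusts the cluster CA from the kubeconfig.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200 "ok"
// or the deadline passes, echoing the api_server.go lines above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: the real check uses the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	_ = waitHealthz("https://192.168.50.245:8444/healthz", 2*time.Minute)
}
```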
	I0717 18:45:57.856684   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:58.002336   81068 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:58.002374   81068 system_pods.go:61] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002383   81068 system_pods.go:61] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002396   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.002402   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.002408   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.002414   81068 system_pods.go:61] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.002418   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.002425   81068 system_pods.go:61] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.002435   81068 system_pods.go:61] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.002452   81068 system_pods.go:74] duration metric: took 145.752129ms to wait for pod list to return data ...
	I0717 18:45:58.002463   81068 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:58.197223   81068 default_sa.go:45] found service account: "default"
	I0717 18:45:58.197250   81068 default_sa.go:55] duration metric: took 194.774408ms for default service account to be created ...
	I0717 18:45:58.197260   81068 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:58.401825   81068 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:58.401878   81068 system_pods.go:89] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401891   81068 system_pods.go:89] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401904   81068 system_pods.go:89] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.401917   81068 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.401927   81068 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.401935   81068 system_pods.go:89] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.401940   81068 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.401948   81068 system_pods.go:89] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.401956   81068 system_pods.go:89] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.401965   81068 system_pods.go:126] duration metric: took 204.700297ms to wait for k8s-apps to be running ...
	I0717 18:45:58.401975   81068 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:58.402024   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:58.416020   81068 system_svc.go:56] duration metric: took 14.023536ms WaitForService to wait for kubelet
	I0717 18:45:58.416056   81068 kubeadm.go:582] duration metric: took 2.464957357s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:58.416079   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:58.598829   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:58.598863   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:58.598876   81068 node_conditions.go:105] duration metric: took 182.791383ms to run NodePressure ...
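The NodePressure step simply reads the node object back and records capacity (17734596Ki ephemeral storage and 2 CPUs here) plus the pressure conditions. The same data is one jsonpath query away; a sketch where the jsonpath expressions are the only moving parts, offered as an equivalent query rather than minikube's own node_conditions code.

```go
package main

import (
	"fmt"
	"os/exec"
)

// nodeField reads a single field from the node object via kubectl jsonpath,
// e.g. the capacity and pressure conditions the log reports.
func nodeField(context, node, jsonpath string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"get", "node", node, "-o", "jsonpath="+jsonpath).Output()
	return string(out), err
}

func main() {
	ctxName, node := "default-k8s-diff-port-022930", "default-k8s-diff-port-022930"
	for _, jp := range []string{
		"{.status.capacity.cpu}",
		"{.status.capacity.ephemeral-storage}",
		`{.status.conditions[?(@.type=="MemoryPressure")].status}`,
		`{.status.conditions[?(@.type=="DiskPressure")].status}`,
	} {
		v, err := nodeField(ctxName, node, jp)
		fmt.Println(jp, "=>", v, err)
	}
}
```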
	I0717 18:45:58.598891   81068 start.go:241] waiting for startup goroutines ...
	I0717 18:45:58.598899   81068 start.go:246] waiting for cluster config update ...
	I0717 18:45:58.598912   81068 start.go:255] writing updated cluster config ...
	I0717 18:45:58.599267   81068 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:58.661380   81068 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:45:58.663085   81068 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-022930" cluster and "default" namespace by default
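The final sanity check compares the host kubectl (1.30.3) against the cluster version (1.30.2) and reports the minor-version skew, which is zero here and therefore within the supported window. A tiny sketch of that comparison as plain string handling; it is not minikube's actual version library, just the arithmetic behind the "minor skew: 0" message.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, e.g. 1.30.3 vs 1.30.2 -> 0.
func minorSkew(client, server string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(server)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.30.3", "1.30.2")
	fmt.Println("minor skew:", skew) // 0, matching the log
}
```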
	I0717 18:46:02.558673   80180 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:46:02.558766   80180 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:02.558842   80180 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:02.558980   80180 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:02.559118   80180 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:02.559210   80180 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:02.561934   80180 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:02.562036   80180 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:02.562108   80180 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:02.562191   80180 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:02.562290   80180 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:02.562393   80180 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:02.562478   80180 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:02.562565   80180 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:02.562643   80180 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:02.562711   80180 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:02.562826   80180 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:02.562886   80180 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:02.562958   80180 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:02.563005   80180 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:02.563081   80180 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:46:02.563136   80180 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:02.563210   80180 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:02.563293   80180 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:02.563405   80180 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:02.563468   80180 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:02.564989   80180 out.go:204]   - Booting up control plane ...
	I0717 18:46:02.565092   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:02.565181   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:02.565270   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:02.565400   80180 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:02.565526   80180 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:02.565597   80180 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:02.565783   80180 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:46:02.565880   80180 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:46:02.565959   80180 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.323304ms
	I0717 18:46:02.566046   80180 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:46:02.566105   80180 kubeadm.go:310] [api-check] The API server is healthy after 5.002038309s
	I0717 18:46:02.566206   80180 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:46:02.566307   80180 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:46:02.566359   80180 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:46:02.566525   80180 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-527415 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:46:02.566575   80180 kubeadm.go:310] [bootstrap-token] Using token: xeax16.7z40teb0jswemrgg
	I0717 18:46:02.568038   80180 out.go:204]   - Configuring RBAC rules ...
	I0717 18:46:02.568120   80180 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:46:02.568194   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:46:02.568314   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:46:02.568449   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:46:02.568553   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:46:02.568660   80180 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:46:02.568807   80180 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:46:02.568877   80180 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:46:02.568926   80180 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:46:02.568936   80180 kubeadm.go:310] 
	I0717 18:46:02.569032   80180 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:46:02.569044   80180 kubeadm.go:310] 
	I0717 18:46:02.569108   80180 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:46:02.569114   80180 kubeadm.go:310] 
	I0717 18:46:02.569157   80180 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:46:02.569249   80180 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:46:02.569326   80180 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:46:02.569346   80180 kubeadm.go:310] 
	I0717 18:46:02.569432   80180 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:46:02.569442   80180 kubeadm.go:310] 
	I0717 18:46:02.569511   80180 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:46:02.569519   80180 kubeadm.go:310] 
	I0717 18:46:02.569599   80180 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:46:02.569695   80180 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:46:02.569790   80180 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:46:02.569797   80180 kubeadm.go:310] 
	I0717 18:46:02.569905   80180 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:46:02.569985   80180 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:46:02.569998   80180 kubeadm.go:310] 
	I0717 18:46:02.570096   80180 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570234   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:46:02.570264   80180 kubeadm.go:310] 	--control-plane 
	I0717 18:46:02.570273   80180 kubeadm.go:310] 
	I0717 18:46:02.570348   80180 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:46:02.570355   80180 kubeadm.go:310] 
	I0717 18:46:02.570429   80180 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570555   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:46:02.570569   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:46:02.570578   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:46:02.571934   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:46:02.573034   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:46:02.583253   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
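Configuring the bridge CNI only takes a directory and a single conflist file: the log shows a 496-byte payload being written to /etc/cni/net.d/1-k8s.conflist but not its contents. The sketch below writes a representative bridge-plus-portmap conflist; the concrete subnet and field values are assumptions for illustration, not the bytes minikube actually ships.

```go
package main

import (
	"os"
	"path/filepath"
)

// writeBridgeConflist drops a minimal bridge CNI config like the one the log
// installs. The concrete values here are illustrative, not minikube's payload.
func writeBridgeConflist(dir string) error {
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644)
}

func main() {
	_ = writeBridgeConflist("/etc/cni/net.d")
}
```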
	I0717 18:46:02.603658   80180 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-527415 minikube.k8s.io/updated_at=2024_07_17T18_46_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=embed-certs-527415 minikube.k8s.io/primary=true
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:02.621414   80180 ops.go:34] apiserver oom_adj: -16
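The oom_adj probe above confirms that the apiserver process carries a strongly negative OOM adjustment (-16 here), i.e. the kernel will avoid killing it under memory pressure. A one-liner equivalent in Go, assuming it runs on the control-plane node itself and that a single kube-apiserver process is present.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj reproduces `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
// It assumes pgrep returns exactly one PID, as on a single control-plane node.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	v, err := apiserverOOMAdj()
	fmt.Println(v, err) // expected "-16" on a control-plane node, as in the log
}
```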
	I0717 18:46:02.792226   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.292632   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.792270   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.293220   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.793011   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.292596   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.793043   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.293286   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.793069   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.292569   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.792604   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.293028   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.792259   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.292273   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.792672   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.293080   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.792442   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.292894   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.792436   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.292411   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.792327   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.292909   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.792878   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.293188   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.793038   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.292453   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.792367   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.898487   80180 kubeadm.go:1113] duration metric: took 13.294815165s to wait for elevateKubeSystemPrivileges
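The burst of `kubectl get sa default` calls above is a fixed-interval retry: after kubeadm init, minikube keeps asking for the default service account (while also labelling the node and creating the minikube-rbac clusterrolebinding) until the controller-manager has populated the namespace, which here took about 13 seconds. A sketch of that retry loop via exec rather than minikube's internal runner; the interval matches the roughly 500ms spacing of the logged attempts.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultServiceAccount retries "kubectl get sa default" every 500ms until
// it succeeds or the timeout elapses, mirroring the polling burst in the log.
func waitDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitDefaultServiceAccount("/var/lib/minikube/kubeconfig", time.Minute)
	fmt.Println(err)
}
```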
	I0717 18:46:15.898528   80180 kubeadm.go:394] duration metric: took 5m13.234208822s to StartCluster
	I0717 18:46:15.898546   80180 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.898626   80180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:46:15.900239   80180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.900462   80180 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:46:15.900564   80180 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:46:15.900648   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:46:15.900655   80180 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-527415"
	I0717 18:46:15.900667   80180 addons.go:69] Setting default-storageclass=true in profile "embed-certs-527415"
	I0717 18:46:15.900691   80180 addons.go:69] Setting metrics-server=true in profile "embed-certs-527415"
	I0717 18:46:15.900704   80180 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-527415"
	I0717 18:46:15.900709   80180 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-527415"
	I0717 18:46:15.900714   80180 addons.go:234] Setting addon metrics-server=true in "embed-certs-527415"
	W0717 18:46:15.900747   80180 addons.go:243] addon metrics-server should already be in state true
	I0717 18:46:15.900777   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	W0717 18:46:15.900715   80180 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:46:15.900852   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.901106   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901150   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901152   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901183   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901264   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901298   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.902177   80180 out.go:177] * Verifying Kubernetes components...
	I0717 18:46:15.903698   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:46:15.918294   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0717 18:46:15.918295   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0717 18:46:15.918859   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.918909   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919433   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919455   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919478   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I0717 18:46:15.919548   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919572   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919788   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.919875   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919883   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920316   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920323   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.920338   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.920345   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920387   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920425   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920695   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920890   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.924623   80180 addons.go:234] Setting addon default-storageclass=true in "embed-certs-527415"
	W0717 18:46:15.924644   80180 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:46:15.924672   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.925801   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.925830   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.936020   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0717 18:46:15.936280   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0717 18:46:15.936365   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.936674   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.937144   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937164   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937229   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937239   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937565   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937587   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937770   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.937872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.939671   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.939856   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.941929   80180 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:46:15.941934   80180 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:46:15.943632   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:46:15.943650   80180 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:46:15.943668   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.943715   80180 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:15.943724   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:46:15.943737   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.946283   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0717 18:46:15.946815   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.947230   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.947240   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.947272   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.947953   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.947987   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948001   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.948179   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.948223   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948248   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.948388   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.948604   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.948627   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.948653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948832   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.948870   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.948895   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.949086   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.949307   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.949454   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.969385   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0717 18:46:15.969789   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.970221   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.970241   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.970756   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.970963   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.972631   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.972849   80180 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:15.972868   80180 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:46:15.972889   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.975680   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976123   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.976187   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976320   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.976496   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.976657   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.976748   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:16.134605   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:46:16.206139   80180 node_ready.go:35] waiting up to 6m0s for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214532   80180 node_ready.go:49] node "embed-certs-527415" has status "Ready":"True"
	I0717 18:46:16.214550   80180 node_ready.go:38] duration metric: took 8.382109ms for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214568   80180 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:16.223573   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:16.254146   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:46:16.254166   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:46:16.293257   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:16.312304   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:16.334927   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:46:16.334949   80180 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:46:16.404696   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:16.404723   80180 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:46:16.462835   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
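Once these manifests are applied, the metrics-server rollout can be checked directly. A minimal sketch; the apiservice name below is the one metrics-server conventionally registers, not taken from this log:

    kubectl --context embed-certs-527415 -n kube-system rollout status deployment/metrics-server
    kubectl --context embed-certs-527415 get apiservice v1beta1.metrics.k8s.io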
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281088   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281157   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281395   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281402   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281424   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281427   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281432   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281436   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281676   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281678   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281700   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281705   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281722   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281732   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.300264   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.300294   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.300592   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.300643   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.300672   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.489477   80180 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026593042s)
	I0717 18:46:17.489520   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.489534   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490020   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.490047   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490055   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490068   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.490077   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490344   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490373   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490384   80180 addons.go:475] Verifying addon metrics-server=true in "embed-certs-527415"
	I0717 18:46:17.490397   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.492257   80180 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:46:17.493487   80180 addons.go:510] duration metric: took 1.592928152s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
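The same addon summary is available from the CLI after start-up, for example:

    minikube -p embed-certs-527415 addons list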
	I0717 18:46:18.230569   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.230592   80180 pod_ready.go:81] duration metric: took 2.006995421s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.230603   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235298   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.235317   80180 pod_ready.go:81] duration metric: took 4.707534ms for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235327   80180 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.238998   80180 pod_ready.go:92] pod "etcd-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.239015   80180 pod_ready.go:81] duration metric: took 3.681191ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.239023   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242949   80180 pod_ready.go:92] pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.242967   80180 pod_ready.go:81] duration metric: took 3.937614ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242977   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246567   80180 pod_ready.go:92] pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.246580   80180 pod_ready.go:81] duration metric: took 3.597434ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246588   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628607   80180 pod_ready.go:92] pod "kube-proxy-m52fq" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.628636   80180 pod_ready.go:81] duration metric: took 382.042151ms for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628650   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028536   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:19.028558   80180 pod_ready.go:81] duration metric: took 399.900565ms for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028565   80180 pod_ready.go:38] duration metric: took 2.813989212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
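The per-pod readiness waits above map onto ordinary kubectl checks; an equivalent sketch for one of the labels in the wait list:

    kubectl --context embed-certs-527415 -n kube-system get pods
    kubectl --context embed-certs-527415 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m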
	I0717 18:46:19.028578   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:46:19.028630   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:46:19.044787   80180 api_server.go:72] duration metric: took 3.144295616s to wait for apiserver process to appear ...
	I0717 18:46:19.044810   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:46:19.044825   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:46:19.051106   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:46:19.052094   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:46:19.052111   80180 api_server.go:131] duration metric: took 7.296406ms to wait for apiserver health ...
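The healthz probe here is a plain HTTPS GET against the apiserver port; it can be repeated from any host that can reach the node (certificate verification skipped for brevity):

    curl -k https://192.168.61.90:8443/healthz
    # expected body on success: ok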
	I0717 18:46:19.052117   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:46:19.231877   80180 system_pods.go:59] 9 kube-system pods found
	I0717 18:46:19.231905   80180 system_pods.go:61] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.231912   80180 system_pods.go:61] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.231916   80180 system_pods.go:61] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.231921   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.231925   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.231929   80180 system_pods.go:61] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.231934   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.231942   80180 system_pods.go:61] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.231947   80180 system_pods.go:61] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.231957   80180 system_pods.go:74] duration metric: took 179.833729ms to wait for pod list to return data ...
	I0717 18:46:19.231966   80180 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:46:19.427972   80180 default_sa.go:45] found service account: "default"
	I0717 18:46:19.427994   80180 default_sa.go:55] duration metric: took 196.021611ms for default service account to be created ...
	I0717 18:46:19.428002   80180 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:46:19.630730   80180 system_pods.go:86] 9 kube-system pods found
	I0717 18:46:19.630755   80180 system_pods.go:89] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.630760   80180 system_pods.go:89] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.630765   80180 system_pods.go:89] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.630769   80180 system_pods.go:89] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.630774   80180 system_pods.go:89] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.630778   80180 system_pods.go:89] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.630782   80180 system_pods.go:89] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.630788   80180 system_pods.go:89] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.630792   80180 system_pods.go:89] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.630800   80180 system_pods.go:126] duration metric: took 202.793522ms to wait for k8s-apps to be running ...
	I0717 18:46:19.630806   80180 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:46:19.630849   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:19.646111   80180 system_svc.go:56] duration metric: took 15.296964ms WaitForService to wait for kubelet
	I0717 18:46:19.646133   80180 kubeadm.go:582] duration metric: took 3.745647205s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:46:19.646149   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:46:19.828333   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:46:19.828356   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:46:19.828368   80180 node_conditions.go:105] duration metric: took 182.213813ms to run NodePressure ...
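The NodePressure figures (ephemeral storage and CPU capacity) are read from the node status and can be fetched back with kubectl, for example:

    kubectl --context embed-certs-527415 get node embed-certs-527415 -o jsonpath='{.status.capacity}{"\n"}'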
	I0717 18:46:19.828381   80180 start.go:241] waiting for startup goroutines ...
	I0717 18:46:19.828389   80180 start.go:246] waiting for cluster config update ...
	I0717 18:46:19.828401   80180 start.go:255] writing updated cluster config ...
	I0717 18:46:19.828690   80180 ssh_runner.go:195] Run: rm -f paused
	I0717 18:46:19.877774   80180 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:46:19.879769   80180 out.go:177] * Done! kubectl is now configured to use "embed-certs-527415" cluster and "default" namespace by default
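At this point the kubeconfig context matches the profile, so plain kubectl talks to the new cluster:

    kubectl config current-context    # embed-certs-527415
    kubectl get pods -A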
	I0717 18:46:33.124646   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:46:33.124790   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:46:33.126245   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.126307   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.126409   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.126547   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.126673   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:33.126734   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:33.128541   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:33.128626   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:33.128707   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:33.128817   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:33.128901   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:33.129018   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:33.129091   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:33.129172   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:33.129249   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:33.129339   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:33.129408   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:33.129444   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:33.129532   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:33.129603   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:33.129665   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:33.129765   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:33.129812   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:33.129929   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:33.130037   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:33.130093   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:33.130177   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:33.131546   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:33.131652   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:33.131750   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:33.131858   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:33.131939   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:33.132085   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:46:33.132133   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:46:33.132189   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132355   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132419   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132585   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132657   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132839   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132900   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133143   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133248   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133452   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133460   80857 kubeadm.go:310] 
	I0717 18:46:33.133494   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:46:33.133529   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:46:33.133535   80857 kubeadm.go:310] 
	I0717 18:46:33.133564   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:46:33.133599   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:46:33.133727   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:46:33.133752   80857 kubeadm.go:310] 
	I0717 18:46:33.133905   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:46:33.133947   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:46:33.134002   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:46:33.134012   80857 kubeadm.go:310] 
	I0717 18:46:33.134116   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:46:33.134186   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:46:33.134193   80857 kubeadm.go:310] 
	I0717 18:46:33.134290   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:46:33.134367   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:46:33.134431   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:46:33.134491   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:46:33.134533   80857 kubeadm.go:310] 
	W0717 18:46:33.134615   80857 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
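The failure text above already names the useful next steps. Spelled out as commands to run on the affected node, taken from the error message itself, with crictl pointed at the CRI-O socket used in this job:

    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then, for a failing container:
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID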
	
	I0717 18:46:33.134669   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:46:33.590879   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:33.605393   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:46:33.614382   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:46:33.614405   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:46:33.614450   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:46:33.622849   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:46:33.622905   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:46:33.631852   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:46:33.640160   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:46:33.640211   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:46:33.648774   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.656740   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:46:33.656796   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.665799   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:46:33.674492   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:46:33.674547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
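The grep-then-rm sequence above amounts to a small loop: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise remove it. A condensed sketch of the same logic (not minikube's actual code):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done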
	I0717 18:46:33.683627   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:46:33.746405   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.746472   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.881152   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.881297   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.881443   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:34.053199   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:34.055757   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:34.055843   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:34.055918   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:34.056030   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:34.056129   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:34.056232   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:34.056336   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:34.056431   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:34.056524   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:34.056656   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:34.056764   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:34.056824   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:34.056900   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:34.276456   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:34.491418   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:34.702265   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:34.874511   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:34.895484   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:34.896451   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:34.896536   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:35.040208   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:35.042291   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:35.042437   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:35.042565   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:35.044391   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:35.046206   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:35.050843   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:47:15.053070   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:47:15.053416   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:15.053586   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:20.053963   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:20.054207   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:30.054801   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:30.055011   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:50.055270   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:50.055465   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.053919   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:48:30.054133   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.054148   80857 kubeadm.go:310] 
	I0717 18:48:30.054231   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:48:30.054300   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:48:30.054326   80857 kubeadm.go:310] 
	I0717 18:48:30.054386   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:48:30.054443   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:48:30.054581   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:48:30.054593   80857 kubeadm.go:310] 
	I0717 18:48:30.054715   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:48:30.054761   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:48:30.054810   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:48:30.054818   80857 kubeadm.go:310] 
	I0717 18:48:30.054970   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:48:30.055069   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:48:30.055081   80857 kubeadm.go:310] 
	I0717 18:48:30.055236   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:48:30.055332   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:48:30.055396   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:48:30.055457   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:48:30.055483   80857 kubeadm.go:310] 
	I0717 18:48:30.056139   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:48:30.056246   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:48:30.056338   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:48:30.056413   80857 kubeadm.go:394] duration metric: took 8m2.908780359s to StartCluster
	I0717 18:48:30.056461   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:48:30.056524   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:48:30.102640   80857 cri.go:89] found id: ""
	I0717 18:48:30.102662   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.102669   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:48:30.102674   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:48:30.102724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:48:30.142516   80857 cri.go:89] found id: ""
	I0717 18:48:30.142548   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.142559   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:48:30.142567   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:48:30.142630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:48:30.178558   80857 cri.go:89] found id: ""
	I0717 18:48:30.178589   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.178598   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:48:30.178604   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:48:30.178677   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:48:30.211146   80857 cri.go:89] found id: ""
	I0717 18:48:30.211177   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.211186   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:48:30.211192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:48:30.211242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:48:30.244287   80857 cri.go:89] found id: ""
	I0717 18:48:30.244308   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.244314   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:48:30.244319   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:48:30.244364   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:48:30.274547   80857 cri.go:89] found id: ""
	I0717 18:48:30.274577   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.274587   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:48:30.274594   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:48:30.274660   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:48:30.306796   80857 cri.go:89] found id: ""
	I0717 18:48:30.306825   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.306835   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:48:30.306842   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:48:30.306903   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:48:30.341938   80857 cri.go:89] found id: ""
	I0717 18:48:30.341962   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.341972   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
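Each of the "0 containers" results above comes from a filtered crictl query; the same checks can be run interactively on the node, e.g.:

    sudo crictl ps -a --name=kube-apiserver
    sudo crictl ps -a --name=etcd
    sudo crictl ps -a    # everything the runtime knows about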
	I0717 18:48:30.341982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:48:30.341997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:48:30.407881   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:48:30.407925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:48:30.430885   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:48:30.430913   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:48:30.525366   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
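The "connection refused" from localhost:8443 is consistent with the kube-apiserver container never starting; a quick check on the node (curl is present on the minikube guest, as the kubelet-check output above shows):

    curl -ksS https://localhost:8443/healthz    # connection refused here means the apiserver is not listening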
	I0717 18:48:30.525394   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:48:30.525408   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:48:30.639556   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:48:30.639588   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 18:48:30.677493   80857 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 18:48:30.677544   80857 out.go:239] * 
	W0717 18:48:30.677604   80857 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.677636   80857 out.go:239] * 
	W0717 18:48:30.678483   80857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:48:30.681792   80857 out.go:177] 
	W0717 18:48:30.682976   80857 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.683034   80857 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 18:48:30.683050   80857 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 18:48:30.684325   80857 out.go:177] 
	
	
	==> CRI-O <==
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.501217668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242112501199723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a48f275-b4f3-410b-99e6-86e48a7fd251 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.501694617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71897ab0-72b5-4e19-8efd-a5d682871128 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.501786537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71897ab0-72b5-4e19-8efd-a5d682871128 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.501818716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=71897ab0-72b5-4e19-8efd-a5d682871128 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.534614037Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13f4b087-3a21-4ba5-8d15-12429fccb0cf name=/runtime.v1.RuntimeService/Version
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.534684500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13f4b087-3a21-4ba5-8d15-12429fccb0cf name=/runtime.v1.RuntimeService/Version
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.535834753Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ce1fa4b-e19a-4f63-8792-c91ce948c02b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.536207680Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242112536185393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ce1fa4b-e19a-4f63-8792-c91ce948c02b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.536808468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a613a62-60e0-49ec-916c-8f501664245b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.536875709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a613a62-60e0-49ec-916c-8f501664245b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.536912099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8a613a62-60e0-49ec-916c-8f501664245b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.568559742Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3571ea43-7578-485a-b6ac-baf20db5ba23 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.568669484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3571ea43-7578-485a-b6ac-baf20db5ba23 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.569625192Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=78c0a822-95a7-461c-9525-8fe25979308d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.570029637Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242112570009620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78c0a822-95a7-461c-9525-8fe25979308d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.570537103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ac2fdcf-1abd-4d1d-bdc8-55ed270740de name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.570606795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ac2fdcf-1abd-4d1d-bdc8-55ed270740de name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.570639639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0ac2fdcf-1abd-4d1d-bdc8-55ed270740de name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.602258364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0cc4e3e9-45ae-450c-acce-7bb7c18c87a6 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.602401867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0cc4e3e9-45ae-450c-acce-7bb7c18c87a6 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.604211674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8229ab8-b123-458e-968d-a61058fcde14 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.605591585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242112605522827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8229ab8-b123-458e-968d-a61058fcde14 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.606279219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b645cadd-651b-4633-9643-c88fc6c9e7f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.606332667Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b645cadd-651b-4633-9643-c88fc6c9e7f9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:48:32 old-k8s-version-019549 crio[648]: time="2024-07-17 18:48:32.606362943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b645cadd-651b-4633-9643-c88fc6c9e7f9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051628] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040768] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.517042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.721932] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.548665] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.018518] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.058706] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069719] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.203391] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.148278] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.237346] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.350758] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.060103] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.283579] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +13.881143] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 18:44] systemd-fstab-generator[5065]: Ignoring "noauto" option for root device
	[Jul17 18:46] systemd-fstab-generator[5343]: Ignoring "noauto" option for root device
	[  +0.061949] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:48:32 up 8 min,  0 users,  load average: 0.06, 0.16, 0.10
	Linux old-k8s-version-019549 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000047180)
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]: goroutine 136 [syscall]:
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]: syscall.Syscall6(0xe8, 0xd, 0xc000c79b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc000c79b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000961000, 0x0, 0x0, 0x0)
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0009c4370)
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jul 17 18:48:29 old-k8s-version-019549 kubelet[5522]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jul 17 18:48:29 old-k8s-version-019549 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 18:48:29 old-k8s-version-019549 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 18:48:30 old-k8s-version-019549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 17 18:48:30 old-k8s-version-019549 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 18:48:30 old-k8s-version-019549 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 18:48:30 old-k8s-version-019549 kubelet[5572]: I0717 18:48:30.454272    5572 server.go:416] Version: v1.20.0
	Jul 17 18:48:30 old-k8s-version-019549 kubelet[5572]: I0717 18:48:30.454598    5572 server.go:837] Client rotation is on, will bootstrap in background
	Jul 17 18:48:30 old-k8s-version-019549 kubelet[5572]: I0717 18:48:30.456764    5572 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 18:48:30 old-k8s-version-019549 kubelet[5572]: W0717 18:48:30.457756    5572 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 17 18:48:30 old-k8s-version-019549 kubelet[5572]: I0717 18:48:30.457816    5572 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019549 -n old-k8s-version-019549
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 2 (219.917526ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-019549" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (709.15s)
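The failure above is kubeadm's [kubelet-check] loop: every probe of http://localhost:10248/healthz on the v1.20.0 node is refused, so the control plane never comes up and minikube exits with K8S_KUBELET_NOT_RUNNING. Below is a minimal Go sketch of that same probe, intended to be run on the affected node (for example via minikube ssh); the endpoint and port are taken from the log above, everything else (program name, retry count, interval) is an illustrative assumption and not part of the test suite.

// healthzprobe repeats the check kubeadm's [kubelet-check] performs:
// GET http://localhost:10248/healthz. A "connection refused" result matches
// the symptom captured in the log above; a 200 "ok" means the kubelet came up.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 1; i <= 10; i++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// Same failure mode as the log: dial tcp 127.0.0.1:10248: connect: connection refused.
			fmt.Printf("attempt %d: %v\n", i, err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: %s %s\n", i, resp.Status, body)
		}
		time.Sleep(5 * time.Second)
	}
}

If the probe never succeeds, the next step suggested in the log ('journalctl -xeu kubelet' on the node) is where the kubelet's own crash reason will appear.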

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930: exit status 3 (3.167772111s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:37:05.217294   80957 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host
	E0717 18:37:05.217320   80957 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-022930 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0717 18:37:09.805679   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:37:10.367381   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-022930 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153095132s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-022930 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930: exit status 3 (3.062859802s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 18:37:14.433422   81037 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host
	E0717 18:37:14.433446   81037 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.245:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-022930" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
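Both status probes in this block fail before Kubernetes is involved at all: minikube's status.go cannot open an SSH session because dial tcp 192.168.50.245:22 returns "no route to host", so the post-stop host state is reported as "Error" instead of "Stopped". The following is a minimal Go sketch of that reachability check, assuming the node IP and SSH port shown in the log; it only attempts the TCP dial (it does not speak SSH) and is an illustration, not part of the test suite.

// reachcheck distinguishes "no route to host"/timeout from an accepted TCP
// connection on the node's SSH port, mirroring the first step minikube's
// status command fails at in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.50.245:22" // IP and port taken from the log; adjust for other profiles
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Printf("dial %s failed: %v\n", addr, err) // e.g. connect: no route to host
		return
	}
	defer conn.Close()
	fmt.Printf("dial %s succeeded; the VM is reachable, so the failure is higher up\n", addr)
}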

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 18:45:41.791900   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 18:45:58.171845   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-066175 -n no-preload-066175
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 18:54:33.53094121 +0000 UTC m=+6188.260137178
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-066175 -n no-preload-066175
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-066175 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-066175 logs -n 25: (2.082650332s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-527415            | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-371172                                        | pause-371172                 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-341716 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | disable-driver-mounts-341716                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:34 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-066175             | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC | 17 Jul 24 18:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-066175                                   | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-022930  | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC | 17 Jul 24 18:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-527415                 | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-019549        | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-066175                  | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-066175 --memory=2200                     | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:45 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-019549             | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-022930       | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC | 17 Jul 24 18:45 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:37:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:37:14.473404   81068 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:37:14.473526   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473535   81068 out.go:304] Setting ErrFile to fd 2...
	I0717 18:37:14.473540   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473714   81068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:37:14.474251   81068 out.go:298] Setting JSON to false
	I0717 18:37:14.475115   81068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8377,"bootTime":1721233057,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:37:14.475172   81068 start.go:139] virtualization: kvm guest
	I0717 18:37:14.477356   81068 out.go:177] * [default-k8s-diff-port-022930] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:37:14.478600   81068 notify.go:220] Checking for updates...
	I0717 18:37:14.478615   81068 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:37:14.480094   81068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:37:14.481516   81068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:37:14.482886   81068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:37:14.484159   81068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:37:14.485449   81068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:37:14.487164   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:37:14.487744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.487795   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.502368   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0717 18:37:14.502712   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.503192   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.503213   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.503574   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.503778   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.504032   81068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:37:14.504326   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.504381   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.518330   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0717 18:37:14.518718   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.519095   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.519114   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.519409   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.519578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.549923   81068 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:37:14.551160   81068 start.go:297] selected driver: kvm2
	I0717 18:37:14.551175   81068 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.551302   81068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:37:14.551931   81068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.552008   81068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:37:14.566038   81068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:37:14.566371   81068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:37:14.566443   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:37:14.566466   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:37:14.566516   81068 start.go:340] cluster config:
	{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.566643   81068 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.568602   81068 out.go:177] * Starting "default-k8s-diff-port-022930" primary control-plane node in "default-k8s-diff-port-022930" cluster
	I0717 18:37:13.057187   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:16.129274   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:14.569868   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:37:14.569908   81068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:37:14.569919   81068 cache.go:56] Caching tarball of preloaded images
	I0717 18:37:14.569992   81068 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:37:14.570003   81068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:37:14.570100   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:37:14.570277   81068 start.go:360] acquireMachinesLock for default-k8s-diff-port-022930: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:37:22.209207   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:25.281226   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:31.361221   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:34.433258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:40.513234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:43.585225   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:49.665198   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:52.737256   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:58.817201   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:01.889213   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:07.969247   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:11.041264   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:17.121227   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:20.193250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:26.273206   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:29.345193   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:35.425259   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:38.497261   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:44.577185   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:47.649306   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:53.729234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:56.801257   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:02.881239   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:05.953258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:12.033251   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:15.105230   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:21.185200   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:24.257195   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:30.337181   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:33.409224   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:39.489219   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:42.561250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:45.565739   80401 start.go:364] duration metric: took 4m11.345351864s to acquireMachinesLock for "no-preload-066175"
	I0717 18:39:45.565801   80401 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:39:45.565807   80401 fix.go:54] fixHost starting: 
	I0717 18:39:45.566167   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:39:45.566198   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:39:45.580996   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45665
	I0717 18:39:45.581389   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:39:45.581797   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:39:45.581817   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:39:45.582145   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:39:45.582323   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:39:45.582467   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:39:45.584074   80401 fix.go:112] recreateIfNeeded on no-preload-066175: state=Stopped err=<nil>
	I0717 18:39:45.584109   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	W0717 18:39:45.584260   80401 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:39:45.586842   80401 out.go:177] * Restarting existing kvm2 VM for "no-preload-066175" ...
	I0717 18:39:45.563046   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:39:45.563105   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563521   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:39:45.563555   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563758   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:39:45.565594   80180 machine.go:97] duration metric: took 4m37.427146226s to provisionDockerMachine
	I0717 18:39:45.565643   80180 fix.go:56] duration metric: took 4m37.448013968s for fixHost
	I0717 18:39:45.565651   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 4m37.448033785s
	W0717 18:39:45.565675   80180 start.go:714] error starting host: provision: host is not running
	W0717 18:39:45.565775   80180 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 18:39:45.565784   80180 start.go:729] Will try again in 5 seconds ...
	I0717 18:39:45.587901   80401 main.go:141] libmachine: (no-preload-066175) Calling .Start
	I0717 18:39:45.588046   80401 main.go:141] libmachine: (no-preload-066175) Ensuring networks are active...
	I0717 18:39:45.588666   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network default is active
	I0717 18:39:45.589012   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network mk-no-preload-066175 is active
	I0717 18:39:45.589386   80401 main.go:141] libmachine: (no-preload-066175) Getting domain xml...
	I0717 18:39:45.589959   80401 main.go:141] libmachine: (no-preload-066175) Creating domain...
	I0717 18:39:46.785717   80401 main.go:141] libmachine: (no-preload-066175) Waiting to get IP...
	I0717 18:39:46.786495   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:46.786912   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:46.786974   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:46.786888   81612 retry.go:31] will retry after 301.458026ms: waiting for machine to come up
	I0717 18:39:47.090556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.091129   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.091154   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.091098   81612 retry.go:31] will retry after 347.107185ms: waiting for machine to come up
	I0717 18:39:47.439530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.440010   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.440033   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.439947   81612 retry.go:31] will retry after 436.981893ms: waiting for machine to come up
	I0717 18:39:47.878684   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.879091   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.879120   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.879051   81612 retry.go:31] will retry after 582.942833ms: waiting for machine to come up
	I0717 18:39:48.464068   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:48.464568   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:48.464593   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:48.464513   81612 retry.go:31] will retry after 633.101908ms: waiting for machine to come up
	I0717 18:39:49.099383   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.099762   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.099784   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.099720   81612 retry.go:31] will retry after 847.181679ms: waiting for machine to come up
	I0717 18:39:50.567294   80180 start.go:360] acquireMachinesLock for embed-certs-527415: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:39:49.948696   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.949228   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.949260   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.949188   81612 retry.go:31] will retry after 1.048891217s: waiting for machine to come up
	I0717 18:39:50.999658   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.000062   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.000099   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.000001   81612 retry.go:31] will retry after 942.285454ms: waiting for machine to come up
	I0717 18:39:51.944171   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.944676   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.944702   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.944632   81612 retry.go:31] will retry after 1.21768861s: waiting for machine to come up
	I0717 18:39:53.163883   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:53.164345   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:53.164368   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:53.164305   81612 retry.go:31] will retry after 1.505905193s: waiting for machine to come up
	I0717 18:39:54.671532   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:54.671951   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:54.671977   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:54.671918   81612 retry.go:31] will retry after 2.885547597s: waiting for machine to come up
	I0717 18:39:57.560375   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:57.560878   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:57.560902   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:57.560830   81612 retry.go:31] will retry after 3.53251124s: waiting for machine to come up
	I0717 18:40:02.249487   80857 start.go:364] duration metric: took 3m17.095542929s to acquireMachinesLock for "old-k8s-version-019549"
	I0717 18:40:02.249548   80857 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:02.249556   80857 fix.go:54] fixHost starting: 
	I0717 18:40:02.249946   80857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:02.249976   80857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:02.269365   80857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0717 18:40:02.269715   80857 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:02.270182   80857 main.go:141] libmachine: Using API Version  1
	I0717 18:40:02.270205   80857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:02.270534   80857 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:02.270738   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:02.270875   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetState
	I0717 18:40:02.272408   80857 fix.go:112] recreateIfNeeded on old-k8s-version-019549: state=Stopped err=<nil>
	I0717 18:40:02.272443   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	W0717 18:40:02.272597   80857 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:02.274702   80857 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-019549" ...
	I0717 18:40:01.094975   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has current primary IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095579   80401 main.go:141] libmachine: (no-preload-066175) Found IP for machine: 192.168.72.216
	I0717 18:40:01.095592   80401 main.go:141] libmachine: (no-preload-066175) Reserving static IP address...
	I0717 18:40:01.095955   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.095980   80401 main.go:141] libmachine: (no-preload-066175) DBG | skip adding static IP to network mk-no-preload-066175 - found existing host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"}
	I0717 18:40:01.095989   80401 main.go:141] libmachine: (no-preload-066175) Reserved static IP address: 192.168.72.216
	I0717 18:40:01.096000   80401 main.go:141] libmachine: (no-preload-066175) Waiting for SSH to be available...
	I0717 18:40:01.096010   80401 main.go:141] libmachine: (no-preload-066175) DBG | Getting to WaitForSSH function...
	I0717 18:40:01.098163   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098498   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.098521   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098631   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH client type: external
	I0717 18:40:01.098657   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa (-rw-------)
	I0717 18:40:01.098692   80401 main.go:141] libmachine: (no-preload-066175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:01.098707   80401 main.go:141] libmachine: (no-preload-066175) DBG | About to run SSH command:
	I0717 18:40:01.098720   80401 main.go:141] libmachine: (no-preload-066175) DBG | exit 0
	I0717 18:40:01.216740   80401 main.go:141] libmachine: (no-preload-066175) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:01.217099   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetConfigRaw
	I0717 18:40:01.217706   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.220160   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220461   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.220492   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220656   80401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/config.json ...
	I0717 18:40:01.220843   80401 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:01.220860   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:01.221067   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.223044   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223347   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.223371   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223531   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.223719   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223864   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223980   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.224125   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.224332   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.224345   80401 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:01.321053   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:01.321083   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321333   80401 buildroot.go:166] provisioning hostname "no-preload-066175"
	I0717 18:40:01.321359   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321529   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.323945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324269   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.324297   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324421   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.324582   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324724   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324837   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.324996   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.325162   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.325175   80401 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-066175 && echo "no-preload-066175" | sudo tee /etc/hostname
	I0717 18:40:01.435003   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-066175
	
	I0717 18:40:01.435033   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.437795   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438113   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.438155   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438344   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.438533   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438692   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.438948   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.439094   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.439108   80401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-066175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-066175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-066175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:01.540598   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:01.540631   80401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:01.540650   80401 buildroot.go:174] setting up certificates
	I0717 18:40:01.540660   80401 provision.go:84] configureAuth start
	I0717 18:40:01.540669   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.540977   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.543503   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543788   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.543817   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543907   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.545954   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546261   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.546280   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546415   80401 provision.go:143] copyHostCerts
	I0717 18:40:01.546483   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:01.546498   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:01.546596   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:01.546730   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:01.546743   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:01.546788   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:01.546878   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:01.546888   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:01.546921   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:01.547054   80401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.no-preload-066175 san=[127.0.0.1 192.168.72.216 localhost minikube no-preload-066175]
	I0717 18:40:01.628522   80401 provision.go:177] copyRemoteCerts
	I0717 18:40:01.628574   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:01.628596   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.631306   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631714   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.631761   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631876   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.632050   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.632210   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.632330   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:01.711344   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:01.738565   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 18:40:01.765888   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:40:01.790852   80401 provision.go:87] duration metric: took 250.181586ms to configureAuth
	I0717 18:40:01.790874   80401 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:01.791046   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:40:01.791111   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.793530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.793922   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.793945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.794095   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.794323   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794497   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794635   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.794786   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.794955   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.794969   80401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:02.032506   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:02.032543   80401 machine.go:97] duration metric: took 811.687511ms to provisionDockerMachine
	I0717 18:40:02.032554   80401 start.go:293] postStartSetup for "no-preload-066175" (driver="kvm2")
	I0717 18:40:02.032567   80401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:02.032596   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.032921   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:02.032966   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.035429   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035731   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.035767   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035921   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.036081   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.036351   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.036493   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.114601   80401 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:02.118230   80401 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:02.118247   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:02.118308   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:02.118384   80401 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:02.118592   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:02.126753   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:02.148028   80401 start.go:296] duration metric: took 115.461293ms for postStartSetup
	I0717 18:40:02.148066   80401 fix.go:56] duration metric: took 16.582258787s for fixHost
	I0717 18:40:02.148084   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.150550   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.150917   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.150949   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.151061   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.151242   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151394   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151513   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.151658   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:02.151828   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:02.151841   80401 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:02.249303   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241602.223072082
	
	I0717 18:40:02.249334   80401 fix.go:216] guest clock: 1721241602.223072082
	I0717 18:40:02.249344   80401 fix.go:229] Guest: 2024-07-17 18:40:02.223072082 +0000 UTC Remote: 2024-07-17 18:40:02.14806999 +0000 UTC m=+268.060359078 (delta=75.002092ms)
	I0717 18:40:02.249388   80401 fix.go:200] guest clock delta is within tolerance: 75.002092ms
	I0717 18:40:02.249396   80401 start.go:83] releasing machines lock for "no-preload-066175", held for 16.683615057s
	I0717 18:40:02.249442   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.249735   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:02.252545   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.252896   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.252929   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.253053   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253516   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253770   80401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:02.253803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.253913   80401 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:02.253937   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.256152   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256462   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.256501   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256558   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.256616   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256718   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.256879   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257013   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.257021   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.257038   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.257158   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.257312   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.257469   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257604   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.376103   80401 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:02.381639   80401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:02.529357   80401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:02.536396   80401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:02.536463   80401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:02.555045   80401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:02.555067   80401 start.go:495] detecting cgroup driver to use...
	I0717 18:40:02.555130   80401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:02.570540   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:02.583804   80401 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:02.583867   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:02.596657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:02.610371   80401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:02.717489   80401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:02.875146   80401 docker.go:233] disabling docker service ...
	I0717 18:40:02.875235   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:02.895657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:02.908366   80401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:03.018375   80401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:03.143922   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:03.160599   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:03.180643   80401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 18:40:03.180709   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.190040   80401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:03.190097   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.199275   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.208647   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.217750   80401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:03.226808   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.235779   80401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.251451   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.261476   80401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:03.269978   80401 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:03.270028   80401 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:03.280901   80401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:03.290184   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:03.409167   80401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:03.541153   80401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:03.541218   80401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:03.546012   80401 start.go:563] Will wait 60s for crictl version
	I0717 18:40:03.546059   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:03.549567   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:03.588396   80401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:03.588467   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.622472   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.652180   80401 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 18:40:03.653613   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:03.656560   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.656959   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:03.656987   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.657222   80401 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:03.661102   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:03.673078   80401 kubeadm.go:883] updating cluster {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:03.673212   80401 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:40:03.673248   80401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:03.703959   80401 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 18:40:03.703986   80401 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:03.704042   80401 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.704078   80401 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.704095   80401 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.704114   80401 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.704150   80401 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.704077   80401 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.704168   80401 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 18:40:03.704243   80401 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.705795   80401 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705801   80401 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.705792   80401 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.705816   80401 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.705829   80401 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 18:40:03.706094   80401 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.925413   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.930827   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 18:40:03.963901   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.964215   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.966162   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.970852   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.973664   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.997849   80401 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 18:40:03.997912   80401 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.997969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118851   80401 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 18:40:04.118888   80401 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.118892   80401 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 18:40:04.118924   80401 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.118934   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118943   80401 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 18:40:04.118969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118969   80401 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.119001   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119027   80401 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 18:40:04.119058   80401 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.119089   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:04.119104   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119065   80401 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 18:40:04.119136   80401 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.119159   80401 ssh_runner.go:195] Run: which crictl
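	[annotation] The "needs transfer" decisions above boil down to inspecting each image in the node's container runtime and comparing the reported ID with the expected hash; on a mismatch (or a missing image) the stale copy is removed with crictl and the image is re-loaded from the local cache. A minimal Go sketch of that check, not minikube's actual code; the image name and hash are copied from the log for illustration:

// needs_transfer_sketch.go - illustrative only; compares a runtime image ID to an expected hash.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func imageID(name string) (string, error) {
	// Equivalent of: sudo podman image inspect --format {{.Id}} <name>
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", name).Output()
	if err != nil {
		return "", err // typically "no such image"
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	img := "registry.k8s.io/coredns/coredns:v1.11.1" // from the log above
	want := "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"

	id, err := imageID(img)
	if err != nil || id != want {
		fmt.Printf("%q needs transfer: not present at expected hash\n", img)
		// The log then removes the stale copy before re-loading it from the cache.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", img).Run()
		return
	}
	fmt.Printf("%q already present, skipping transfer\n", img)
}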
	I0717 18:40:02.275985   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .Start
	I0717 18:40:02.276143   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring networks are active...
	I0717 18:40:02.276898   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network default is active
	I0717 18:40:02.277333   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network mk-old-k8s-version-019549 is active
	I0717 18:40:02.277796   80857 main.go:141] libmachine: (old-k8s-version-019549) Getting domain xml...
	I0717 18:40:02.278481   80857 main.go:141] libmachine: (old-k8s-version-019549) Creating domain...
	I0717 18:40:03.571325   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting to get IP...
	I0717 18:40:03.572359   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.572836   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.572968   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.572816   81751 retry.go:31] will retry after 301.991284ms: waiting for machine to come up
	I0717 18:40:03.876263   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.876688   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.876715   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.876637   81751 retry.go:31] will retry after 286.461163ms: waiting for machine to come up
	I0717 18:40:04.165366   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.165873   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.165902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.165811   81751 retry.go:31] will retry after 383.479108ms: waiting for machine to come up
	I0717 18:40:04.551152   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.551615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.551650   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.551589   81751 retry.go:31] will retry after 429.076714ms: waiting for machine to come up
	I0717 18:40:04.982157   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.982517   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.982545   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.982470   81751 retry.go:31] will retry after 553.684035ms: waiting for machine to come up
	I0717 18:40:04.122952   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.130590   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.130741   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.200609   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.200631   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.200643   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 18:40:04.200728   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:04.200741   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.200815   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.212034   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 18:40:04.212057   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.212113   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:04.212123   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.259447   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259525   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259548   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259552   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259553   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 18:40:04.259534   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.259588   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259591   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 18:40:04.259628   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259639   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.550060   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236639   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.976976668s)
	I0717 18:40:06.236683   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236691   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.97711629s)
	I0717 18:40:06.236718   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236732   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.977125153s)
	I0717 18:40:06.236752   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 18:40:06.236776   80401 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236854   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236781   80401 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.68669473s)
	I0717 18:40:06.236908   80401 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 18:40:06.236951   80401 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236994   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:08.107122   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870244887s)
	I0717 18:40:08.107152   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 18:40:08.107175   80401 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107203   80401 ssh_runner.go:235] Completed: which crictl: (1.870188554s)
	I0717 18:40:08.107224   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107261   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:08.146817   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 18:40:08.146932   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:05.538229   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:05.538753   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:05.538777   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:05.538702   81751 retry.go:31] will retry after 747.130907ms: waiting for machine to come up
	I0717 18:40:06.287146   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:06.287626   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:06.287665   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:06.287581   81751 retry.go:31] will retry after 1.171580264s: waiting for machine to come up
	I0717 18:40:07.461393   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:07.462015   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:07.462046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:07.461963   81751 retry.go:31] will retry after 1.199265198s: waiting for machine to come up
	I0717 18:40:08.663340   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:08.663789   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:08.663815   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:08.663745   81751 retry.go:31] will retry after 1.621895351s: waiting for machine to come up
	I0717 18:40:11.404193   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.296944718s)
	I0717 18:40:11.404228   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 18:40:11.404248   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:11.404245   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.257289666s)
	I0717 18:40:11.404272   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 18:40:11.404294   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:13.370389   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966067238s)
	I0717 18:40:13.370426   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 18:40:13.370455   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:13.370505   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:10.287596   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:10.288019   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:10.288046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:10.287964   81751 retry.go:31] will retry after 1.748504204s: waiting for machine to come up
	I0717 18:40:12.038137   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:12.038582   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:12.038615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:12.038532   81751 retry.go:31] will retry after 2.477996004s: waiting for machine to come up
	I0717 18:40:14.517788   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:14.518175   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:14.518203   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:14.518123   81751 retry.go:31] will retry after 3.29313184s: waiting for machine to come up
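	[annotation] The interleaved 80857 lines above are the KVM driver polling for the restarted VM's DHCP lease, retrying with a growing, jittered delay (301ms, 286ms, 383ms, ... up to several seconds). A simplified sketch of that loop, under the assumption that lookupIP is a stand-in for the real libvirt lease query keyed on the domain's MAC address:

// wait_for_ip_sketch.go - poll for the VM's IP with growing, jittered backoff until a deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Placeholder: pretend the DHCP lease is not there yet.
	return "", errors.New("unable to find current IP address of domain")
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	delay := 300 * time.Millisecond

	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Jittered, growing wait, similar to the retry intervals logged above.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for an IP")
}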
	I0717 18:40:19.093608   81068 start.go:364] duration metric: took 3m4.523289209s to acquireMachinesLock for "default-k8s-diff-port-022930"
	I0717 18:40:19.093694   81068 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:19.093705   81068 fix.go:54] fixHost starting: 
	I0717 18:40:19.094122   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:19.094157   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:19.113793   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0717 18:40:19.114236   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:19.114755   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:40:19.114775   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:19.115110   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:19.115294   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:19.115434   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:40:19.117072   81068 fix.go:112] recreateIfNeeded on default-k8s-diff-port-022930: state=Stopped err=<nil>
	I0717 18:40:19.117109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	W0717 18:40:19.117256   81068 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:19.120986   81068 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-022930" ...
	I0717 18:40:15.214734   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.844202729s)
	I0717 18:40:15.214756   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 18:40:15.214777   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:15.214814   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:17.066570   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.851726063s)
	I0717 18:40:17.066604   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 18:40:17.066629   80401 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.066679   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.703556   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 18:40:17.703614   80401 cache_images.go:123] Successfully loaded all cached images
	I0717 18:40:17.703624   80401 cache_images.go:92] duration metric: took 13.999623105s to LoadCachedImages
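	[annotation] The LoadCachedImages phase that finishes here follows a simple pattern per image: if the tarball is already on the node (the "copy: skipping ... (exists)" lines) the transfer is skipped, then the tarball is loaded into the CRI-O image store with podman. A rough local sketch of that flow (paths are illustrative; minikube actually runs these commands over SSH on the node):

// load_cached_image_sketch.go - skip the copy when the tarball exists, then `podman load` it.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	tarball := "/var/lib/minikube/images/etcd_3.5.14-0" // example path from the log

	if _, err := os.Stat(tarball); err == nil {
		fmt.Printf("copy: skipping %s (exists)\n", tarball)
	} else {
		log.Fatalf("tarball missing, would copy it from the local cache first: %v", err)
	}

	// Equivalent of: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		log.Fatalf("podman load failed: %v\n%s", err, out)
	}
	fmt.Println("transferred and loaded image from cache")
}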
	I0717 18:40:17.703638   80401 kubeadm.go:934] updating node { 192.168.72.216 8443 v1.31.0-beta.0 crio true true} ...
	I0717 18:40:17.703754   80401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-066175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
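	[annotation] The kubelet unit text printed above is generated from the node's config (binary version, hostname override, node IP) and written to a systemd drop-in. A small sketch of rendering such a drop-in with text/template; the template and field names here are assumptions for illustration, not minikube's actual template:

// kubelet_unit_sketch.go - render a kubelet systemd drop-in from node parameters.
package main

import (
	"os"
	"text/template"
)

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.0-beta.0", "no-preload-066175", "192.168.72.216"}

	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// The rendered text is what ends up in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}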
	I0717 18:40:17.703830   80401 ssh_runner.go:195] Run: crio config
	I0717 18:40:17.753110   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:17.753138   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:17.753159   80401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:17.753190   80401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.216 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-066175 NodeName:no-preload-066175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:17.753404   80401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-066175"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:17.753492   80401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 18:40:17.763417   80401 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:17.763491   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:17.772139   80401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 18:40:17.786982   80401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 18:40:17.801327   80401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
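	[annotation] The "scp memory" steps above stream generated content (the kubelet drop-in, the unit file, and the kubeadm.yaml.new just rendered) straight from memory onto the node rather than staging a local file. A loose sketch of that idea, assuming a plain ssh client and sudo tee; the host, key path and YAML body below are placeholders taken from the log:

// scp_memory_sketch.go - write an in-memory document to a remote path over SSH stdin.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	kubeadmYAML := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n# ... rest of the generated config ...\n"

	cmd := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa", // placeholder key
		"docker@192.168.72.216",
		"sudo tee /var/tmp/minikube/kubeadm.yaml.new > /dev/null")
	cmd.Stdin = strings.NewReader(kubeadmYAML)

	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("writing kubeadm.yaml.new failed: %v\n%s", err, out)
	}
}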
	I0717 18:40:17.816796   80401 ssh_runner.go:195] Run: grep 192.168.72.216	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:17.820354   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
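	[annotation] The bash one-liner above makes the control-plane /etc/hosts entry idempotent: strip any existing control-plane.minikube.internal line, append the current one, and copy the result back into place. A pure-Go sketch of the same operation (paths and IP taken from the log; this would need to run as root):

// hosts_entry_sketch.go - drop stale control-plane entries and append the current one.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	entry := "192.168.72.216\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop stale entries, like the `grep -v` in the one-liner
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}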
	I0717 18:40:17.834155   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:17.970222   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:17.989953   80401 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175 for IP: 192.168.72.216
	I0717 18:40:17.989977   80401 certs.go:194] generating shared ca certs ...
	I0717 18:40:17.989998   80401 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:17.990160   80401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:17.990217   80401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:17.990231   80401 certs.go:256] generating profile certs ...
	I0717 18:40:17.990365   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key
	I0717 18:40:17.990460   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672
	I0717 18:40:17.990509   80401 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key
	I0717 18:40:17.990679   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:17.990723   80401 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:17.990740   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:17.990772   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:17.990813   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:17.990846   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:17.990905   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:17.991590   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:18.035349   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:18.079539   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:18.110382   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:18.135920   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:40:18.168675   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:18.196132   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:18.230418   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:18.254319   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:18.277293   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:18.301416   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:18.330021   80401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:18.348803   80401 ssh_runner.go:195] Run: openssl version
	I0717 18:40:18.355126   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:18.366004   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370221   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370287   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.375799   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:18.385991   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:18.396141   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400451   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400526   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.406203   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:18.419059   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:18.429450   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433742   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433794   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.439261   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
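	[annotation] The `openssl x509 -hash` / `ln -fs` pairs above install each CA into the system trust store under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A sketch of that step in Go, shelling out to openssl for the hash and creating the symlink directly; paths mirror the log and root privileges are assumed:

// cert_hash_link_sketch.go - link a CA cert into /etc/ssl/certs under its subject hash.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		log.Fatalf("openssl failed: %v", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // emulate `ln -fs`: replace an existing link if present
	if err := os.Symlink(certPath, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", certPath)
}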
	I0717 18:40:18.450327   80401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:18.454734   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:18.460256   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:18.465766   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:18.471349   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:18.476780   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:18.482509   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
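	[annotation] Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours; a failure here is what triggers regeneration before the control plane is restarted. The same check expressed with Go's crypto/x509, using one of the paths from the log:

// cert_checkend_sketch.go - report whether a certificate expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h; regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}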
	I0717 18:40:18.488138   80401 kubeadm.go:392] StartCluster: {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:18.488229   80401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:18.488270   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.532219   80401 cri.go:89] found id: ""
	I0717 18:40:18.532318   80401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:18.542632   80401 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:18.542655   80401 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:18.542699   80401 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:18.552352   80401 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:18.553351   80401 kubeconfig.go:125] found "no-preload-066175" server: "https://192.168.72.216:8443"
	I0717 18:40:18.555295   80401 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:18.565857   80401 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.216
	I0717 18:40:18.565892   80401 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:18.565905   80401 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:18.565958   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.605512   80401 cri.go:89] found id: ""
	I0717 18:40:18.605593   80401 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:18.622235   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:18.633175   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:18.633196   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:18.633241   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:18.641969   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:18.642023   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:18.651017   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:18.659619   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:18.659667   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:18.668008   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.675985   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:18.676037   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.685937   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:18.695574   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:18.695624   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
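	[annotation] The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise (including the "No such file" case here) it is removed so the kubeadm init phases that follow can regenerate it. A condensed local sketch of that loop; minikube runs the equivalent commands over SSH:

// stale_kubeconfig_sketch.go - keep kubeconfigs that match the expected endpoint, remove the rest.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, endpoint) {
			fmt.Println("keeping", f)
			continue
		}
		// Missing file or wrong endpoint: remove it (rm -f semantics, errors ignored).
		_ = os.Remove(f)
		fmt.Println("removed stale", f)
	}
}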
	I0717 18:40:18.706040   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:18.717397   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:18.836009   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:19.122366   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Start
	I0717 18:40:19.122530   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring networks are active...
	I0717 18:40:19.123330   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network default is active
	I0717 18:40:19.123832   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network mk-default-k8s-diff-port-022930 is active
	I0717 18:40:19.124268   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Getting domain xml...
	I0717 18:40:19.124922   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Creating domain...
	I0717 18:40:17.813673   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814213   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has current primary IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814242   80857 main.go:141] libmachine: (old-k8s-version-019549) Found IP for machine: 192.168.39.128
	I0717 18:40:17.814277   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserving static IP address...
	I0717 18:40:17.814720   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserved static IP address: 192.168.39.128
	I0717 18:40:17.814738   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting for SSH to be available...
	I0717 18:40:17.814762   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.814783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | skip adding static IP to network mk-old-k8s-version-019549 - found existing host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"}
	I0717 18:40:17.814796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Getting to WaitForSSH function...
	I0717 18:40:17.817314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817714   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.817743   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH client type: external
	I0717 18:40:17.817944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa (-rw-------)
	I0717 18:40:17.817971   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:17.817984   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | About to run SSH command:
	I0717 18:40:17.818000   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | exit 0
	I0717 18:40:17.945902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | SSH cmd err, output: <nil>: 
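	[annotation] The "Waiting for SSH to be available" phase that succeeds here simply keeps running `exit 0` over SSH with the machine's key until the command returns cleanly. A minimal sketch of that loop, with the host and key path taken from the log and the ssh options reduced to the essentials:

// wait_for_ssh_sketch.go - retry `exit 0` over SSH until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshExitZero() error {
	return exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "ConnectTimeout=10",
		"-i", "/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa",
		"docker@192.168.39.128",
		"exit 0").Run()
}

func main() {
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		if err := sshExitZero(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}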
	I0717 18:40:17.946262   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetConfigRaw
	I0717 18:40:17.946907   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:17.949757   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950158   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.950178   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950474   80857 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/config.json ...
	I0717 18:40:17.950706   80857 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:17.950728   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:17.950941   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:17.953738   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954141   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.954184   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954282   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:17.954456   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954617   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954790   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:17.954957   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:17.955121   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:17.955131   80857 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:18.061082   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:18.061113   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061405   80857 buildroot.go:166] provisioning hostname "old-k8s-version-019549"
	I0717 18:40:18.061432   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061685   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.064855   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.065348   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065537   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.065777   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.065929   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.066118   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.066329   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.066547   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.066564   80857 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-019549 && echo "old-k8s-version-019549" | sudo tee /etc/hostname
	I0717 18:40:18.191467   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-019549
	
	I0717 18:40:18.191517   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.194917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195455   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.195502   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195714   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.195908   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196105   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196288   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.196483   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.196708   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.196731   80857 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-019549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-019549/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-019549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:18.315020   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:18.315047   80857 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:18.315065   80857 buildroot.go:174] setting up certificates
	I0717 18:40:18.315078   80857 provision.go:84] configureAuth start
	I0717 18:40:18.315090   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.315358   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:18.318342   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.318796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.318826   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.319078   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.321562   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.321914   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.321944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.322125   80857 provision.go:143] copyHostCerts
	I0717 18:40:18.322208   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:18.322226   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:18.322309   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:18.322443   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:18.322457   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:18.322492   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:18.322579   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:18.322591   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:18.322621   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:18.322727   80857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-019549 san=[127.0.0.1 192.168.39.128 localhost minikube old-k8s-version-019549]
	I0717 18:40:18.397216   80857 provision.go:177] copyRemoteCerts
	I0717 18:40:18.397266   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:18.397301   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.399887   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400237   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.400286   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400531   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.400732   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.400880   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.401017   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.490677   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:18.518392   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 18:40:18.543930   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:18.567339   80857 provision.go:87] duration metric: took 252.250106ms to configureAuth
	I0717 18:40:18.567360   80857 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:18.567539   80857 config.go:182] Loaded profile config "old-k8s-version-019549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:40:18.567610   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.570373   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.570809   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570943   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.571140   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571281   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.571624   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.571841   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.571862   80857 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:18.845725   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:18.845752   80857 machine.go:97] duration metric: took 895.03234ms to provisionDockerMachine
	I0717 18:40:18.845765   80857 start.go:293] postStartSetup for "old-k8s-version-019549" (driver="kvm2")
	I0717 18:40:18.845778   80857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:18.845828   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:18.846158   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:18.846192   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.848760   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849264   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.849293   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.849649   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.849843   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.850007   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.938026   80857 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:18.943223   80857 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:18.943254   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:18.943317   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:18.943417   80857 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:18.943509   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:18.954887   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:18.976980   80857 start.go:296] duration metric: took 131.200877ms for postStartSetup
	I0717 18:40:18.977022   80857 fix.go:56] duration metric: took 16.727466541s for fixHost
	I0717 18:40:18.977041   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.980020   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980384   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.980417   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980533   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.980723   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.980903   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.981059   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.981207   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.981406   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.981418   80857 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:19.093409   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241619.063415252
	
	I0717 18:40:19.093433   80857 fix.go:216] guest clock: 1721241619.063415252
	I0717 18:40:19.093443   80857 fix.go:229] Guest: 2024-07-17 18:40:19.063415252 +0000 UTC Remote: 2024-07-17 18:40:18.97702579 +0000 UTC m=+213.960604949 (delta=86.389462ms)
	I0717 18:40:19.093494   80857 fix.go:200] guest clock delta is within tolerance: 86.389462ms
	I0717 18:40:19.093506   80857 start.go:83] releasing machines lock for "old-k8s-version-019549", held for 16.843984035s
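Note: the fix.go step above reads the guest clock over SSH (logged with the %!s(MISSING) formatting artifact; the intended command is presumably `date +%s.%N`) and compares it with the host clock, resyncing only when the delta exceeds a tolerance; here the 86.4 ms delta is within tolerance. A minimal sketch of that comparison, assuming a hypothetical 2-second tolerance (the actual threshold is not shown in the log):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest/host clock skew is small
// enough to skip a resync. Sketch only; the tolerance value is an assumption.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	// Values taken from the log: guest clock 1721241619.063415252, delta 86.389462ms.
	guest := time.Unix(1721241619, 63415252)
	host := guest.Add(-86389462 * time.Nanosecond)
	fmt.Println(clockDeltaWithinTolerance(guest, host, 2*time.Second)) // true
}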
	I0717 18:40:19.093543   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.093842   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:19.096443   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.096817   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.096848   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.097035   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097579   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097769   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097859   80857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:19.097915   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.098007   80857 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:19.098031   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.100775   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101108   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101160   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101185   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101412   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101595   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.101606   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101637   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101718   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.101789   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101853   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.101975   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.102092   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.102212   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.218596   80857 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:19.225675   80857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:19.371453   80857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:19.381365   80857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:19.381438   80857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:19.397504   80857 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:19.397530   80857 start.go:495] detecting cgroup driver to use...
	I0717 18:40:19.397597   80857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:19.412150   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:19.425495   80857 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:19.425578   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:19.438662   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:19.451953   80857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:19.578702   80857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:19.733328   80857 docker.go:233] disabling docker service ...
	I0717 18:40:19.733411   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:19.753615   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:19.774057   80857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:19.933901   80857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:20.049914   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:20.063500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:20.082560   80857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 18:40:20.082611   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.092857   80857 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:20.092912   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.103283   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.112612   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
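Note: the sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place to point cri-o at the registry.k8s.io/pause:3.2 pause image and switch it to the cgroupfs cgroup manager (adding conmon_cgroup = "pod"). A rough Go sketch of the first two substitutions applied locally; this only illustrates the edits, it is not minikube's implementation, which shells out to sed over SSH as logged:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf forces pause_image and cgroup_manager to the desired values,
// mirroring the sed edits in the log. Illustrative sketch only.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}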
	I0717 18:40:20.122671   80857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:20.132892   80857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:20.145445   80857 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:20.145501   80857 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:20.158958   80857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:20.168377   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:20.307224   80857 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:20.453407   80857 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:20.453490   80857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:20.458007   80857 start.go:563] Will wait 60s for crictl version
	I0717 18:40:20.458062   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:20.461420   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:20.507358   80857 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:20.507426   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.542812   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.577280   80857 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 18:40:20.432028   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.59597321s)
	I0717 18:40:20.432063   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.633854   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.728474   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.879989   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:20.880079   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.380421   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.880208   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.912390   80401 api_server.go:72] duration metric: took 1.032400417s to wait for apiserver process to appear ...
	I0717 18:40:21.912419   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:21.912443   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:21.912904   80401 api_server.go:269] stopped: https://192.168.72.216:8443/healthz: Get "https://192.168.72.216:8443/healthz": dial tcp 192.168.72.216:8443: connect: connection refused
	I0717 18:40:22.412598   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
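Note: process 80401 polls the apiserver's /healthz endpoint at https://192.168.72.216:8443 roughly every 500 ms, tolerating connection-refused, 403, and 500 responses until the control plane returns 200. A minimal sketch of such a poll loop, assuming a hypothetical waitForHealthz helper and skipping TLS verification purely for illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// Illustrative only; minikube's real logic lives in its api_server.go.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed cert during bootstrap; verification
		// is skipped here only to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 403/500 are expected while post-start hooks finish; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.216:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}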
	I0717 18:40:20.397025   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting to get IP...
	I0717 18:40:20.398122   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398525   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398610   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.398506   81910 retry.go:31] will retry after 285.646022ms: waiting for machine to come up
	I0717 18:40:20.686556   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687151   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687263   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.687202   81910 retry.go:31] will retry after 239.996ms: waiting for machine to come up
	I0717 18:40:20.928604   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929111   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929139   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.929057   81910 retry.go:31] will retry after 487.674422ms: waiting for machine to come up
	I0717 18:40:21.418475   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418928   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.418872   81910 retry.go:31] will retry after 439.363216ms: waiting for machine to come up
	I0717 18:40:21.859546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860273   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.860145   81910 retry.go:31] will retry after 598.922134ms: waiting for machine to come up
	I0717 18:40:22.461026   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461509   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461542   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:22.461457   81910 retry.go:31] will retry after 908.602286ms: waiting for machine to come up
	I0717 18:40:23.371582   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372170   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:23.372093   81910 retry.go:31] will retry after 893.690966ms: waiting for machine to come up
	I0717 18:40:24.267377   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267908   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267935   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:24.267873   81910 retry.go:31] will retry after 1.468061022s: waiting for machine to come up
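Note: while default-k8s-diff-port-022930 waits for a DHCP lease, retry.go retries with growing, jittered intervals (285 ms, 239 ms, 487 ms, ... 1.47 s). A small sketch of a retry loop with jittered backoff in that spirit; the helper name and the exact backoff formula are illustrative assumptions, not minikube's retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
// jittered, growing interval between tries (illustrative sketch).
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the wait roughly geometrically and add jitter, similar in spirit
		// to the intervals seen in the log.
		wait := base*time.Duration(1<<uint(i)) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(5, 250*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	})
	fmt.Println("done:", err)
}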
	I0717 18:40:20.578679   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:20.581569   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.581933   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:20.581961   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.582197   80857 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:20.586047   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:20.598137   80857 kubeadm.go:883] updating cluster {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:20.598284   80857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:40:20.598355   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:20.646681   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:20.646757   80857 ssh_runner.go:195] Run: which lz4
	I0717 18:40:20.650691   80857 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:20.654703   80857 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:20.654730   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 18:40:22.163706   80857 crio.go:462] duration metric: took 1.513040695s to copy over tarball
	I0717 18:40:22.163783   80857 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
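Note: because no preloaded images were found for v1.20.0 on crio, the 473 MB preload tarball is copied to the guest and unpacked into /var with lz4, as shown above. A sketch of the extraction step using the same tar flags as the logged command; the helper is hypothetical and runs locally rather than over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a preloaded-images tarball into dir, mirroring the
// tar invocation in the log. Requires tar and lz4 on PATH; sketch only.
func extractPreload(tarball, dir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not found: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}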
	I0717 18:40:24.904256   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.904292   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.904308   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:24.971088   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.971120   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.971136   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.015832   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.015868   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.413309   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.418927   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.418955   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.913026   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.917375   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.917407   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.412566   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.419115   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.419140   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.912680   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.920245   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.920268   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.412854   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.417356   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.417390   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.912883   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.918242   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.918274   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:28.412591   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:28.419257   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:40:28.427814   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:40:28.427842   80401 api_server.go:131] duration metric: took 6.515416451s to wait for apiserver health ...
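	The retries above poll https://192.168.72.216:8443/healthz roughly every 500ms until the apiserver returns 200. Below is a minimal Go sketch of such a poll loop, illustrative only and not minikube's api_server.go: the URL and retry cadence are taken from the log, and TLS verification is skipped because this throwaway client has no trust anchor for the apiserver certificate.

	// healthzpoll.go - a minimal sketch (not minikube's actual code) of polling a
	// kube-apiserver /healthz endpoint until it reports 200, as the log above does.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// The apiserver's serving cert is not in this client's trust store, so
		// certificate verification is skipped for illustration purposes only.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: the control plane is serving
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence seen in the log
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.216:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}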
	I0717 18:40:28.427854   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:28.427863   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:28.429828   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:40:28.431012   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:28.444822   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:40:28.465212   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:28.477639   80401 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:28.477691   80401 system_pods.go:61] "coredns-5cfdc65f69-spj2w" [6849b651-9346-4d96-97a7-88eca7bbd50a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:28.477706   80401 system_pods.go:61] "etcd-no-preload-066175" [be012488-220b-421d-bf16-a3623fafb8fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:28.477721   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [4292a786-61f3-405d-8784-ec8a58e1b124] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:28.477731   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [937a48f4-7fca-4cee-bb50-51f1720960da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:28.477739   80401 system_pods.go:61] "kube-proxy-tn5xn" [f0a910b3-98b6-470f-a5a2-e49369ecb733] Running
	I0717 18:40:28.477748   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [ffa2475c-7a5a-4988-89a2-4727e07356cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:28.477756   80401 system_pods.go:61] "metrics-server-78fcd8795b-mbtvd" [ccd7a565-52ef-49be-b659-31ae20af537a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:28.477761   80401 system_pods.go:61] "storage-provisioner" [19914ecc-2fcc-4cb8-bd78-fb6891dcf85d] Running
	I0717 18:40:28.477769   80401 system_pods.go:74] duration metric: took 12.536267ms to wait for pod list to return data ...
	I0717 18:40:28.477777   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:28.482322   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:28.482348   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:28.482368   80401 node_conditions.go:105] duration metric: took 4.585233ms to run NodePressure ...
	I0717 18:40:28.482387   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.768656   80401 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773308   80401 kubeadm.go:739] kubelet initialised
	I0717 18:40:28.773330   80401 kubeadm.go:740] duration metric: took 4.654448ms waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773338   80401 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:28.778778   80401 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
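	pod_ready.go above waits for each system-critical pod to report the Ready condition. The following is a hedged client-go sketch of an equivalent check, a hypothetical helper rather than minikube's implementation; the kubeconfig path is a placeholder and the pod name is copied from the log.

	// podready.go - an illustrative "wait for Ready" check using client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the named pod has its Ready condition set to True.
	func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Placeholder kubeconfig path; substitute the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			ready, err := isPodReady(ctx, cs, "kube-system", "coredns-5cfdc65f69-spj2w")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to become Ready")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}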
	I0717 18:40:25.738071   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738580   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738611   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:25.738538   81910 retry.go:31] will retry after 1.505740804s: waiting for machine to come up
	I0717 18:40:27.246293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246651   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246674   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:27.246606   81910 retry.go:31] will retry after 1.574253799s: waiting for machine to come up
	I0717 18:40:28.822159   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822597   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:28.822517   81910 retry.go:31] will retry after 2.132842884s: waiting for machine to come up
	I0717 18:40:25.307875   80857 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.144060111s)
	I0717 18:40:25.307903   80857 crio.go:469] duration metric: took 3.144169984s to extract the tarball
	I0717 18:40:25.307914   80857 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:25.354436   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:25.404799   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:25.404827   80857 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:25.404884   80857 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.404936   80857 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 18:40:25.404908   80857 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.404952   80857 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.404998   80857 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.405010   80857 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.406661   80857 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.406667   80857 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 18:40:25.406690   80857 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.407119   80857 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.619950   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 18:40:25.635075   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.641561   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.647362   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.648054   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.649684   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.664183   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.709163   80857 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 18:40:25.709227   80857 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 18:40:25.709275   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.760931   80857 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 18:40:25.760994   80857 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.761042   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.779324   80857 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 18:40:25.779378   80857 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.779429   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799052   80857 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 18:40:25.799097   80857 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.799106   80857 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 18:40:25.799131   80857 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 18:40:25.799190   80857 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.799233   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799136   80857 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.799148   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799298   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.806973   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 18:40:25.807041   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.807066   80857 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 18:40:25.807095   80857 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.807126   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.807137   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.807237   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.811025   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.811114   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.935792   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 18:40:25.935853   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 18:40:25.935863   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 18:40:25.935934   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.935973   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 18:40:25.935996   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 18:40:25.940351   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 18:40:25.970107   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 18:40:26.231894   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:26.372230   80857 cache_images.go:92] duration metric: took 967.383323ms to LoadCachedImages
	W0717 18:40:26.372327   80857 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0717 18:40:26.372346   80857 kubeadm.go:934] updating node { 192.168.39.128 8443 v1.20.0 crio true true} ...
	I0717 18:40:26.372517   80857 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-019549 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:26.372613   80857 ssh_runner.go:195] Run: crio config
	I0717 18:40:26.416155   80857 cni.go:84] Creating CNI manager for ""
	I0717 18:40:26.416181   80857 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:26.416196   80857 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:26.416229   80857 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-019549 NodeName:old-k8s-version-019549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 18:40:26.416526   80857 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-019549"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:26.416595   80857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 18:40:26.426941   80857 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:26.427006   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:26.437810   80857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 18:40:26.460046   80857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:26.482521   80857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 18:40:26.502536   80857 ssh_runner.go:195] Run: grep 192.168.39.128	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:26.506513   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:26.520895   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:26.648931   80857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:26.665278   80857 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549 for IP: 192.168.39.128
	I0717 18:40:26.665300   80857 certs.go:194] generating shared ca certs ...
	I0717 18:40:26.665329   80857 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:26.665508   80857 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:26.665561   80857 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:26.665574   80857 certs.go:256] generating profile certs ...
	I0717 18:40:26.665693   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.key
	I0717 18:40:26.665780   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key.9c9b0a7e
	I0717 18:40:26.665836   80857 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key
	I0717 18:40:26.665998   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:26.666049   80857 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:26.666063   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:26.666095   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:26.666128   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:26.666167   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:26.666225   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:26.667047   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:26.713984   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:26.742617   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:26.770441   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:26.795098   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 18:40:26.825038   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:26.861300   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:26.901664   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:40:26.926357   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:26.948986   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:26.973248   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:26.994642   80857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:27.010158   80857 ssh_runner.go:195] Run: openssl version
	I0717 18:40:27.015861   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:27.026221   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030496   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030567   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.035862   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:27.046312   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:27.057117   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061775   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061824   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.067535   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:27.079022   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:27.090009   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094688   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094768   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.100404   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:27.110653   80857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:27.115117   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:27.120633   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:27.126070   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:27.131500   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:27.137035   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:27.142426   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
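	The openssl runs above use "-checkend 86400" to confirm that no control-plane certificate expires within the next 24 hours. An equivalent check in Go's crypto/x509 could look like the sketch below; it assumes the certificate is PEM-encoded at the path shown in the log and is not minikube's code.

	// certcheck.go - an illustrative equivalent of "openssl x509 -checkend 86400":
	// does the PEM certificate at the given path expire within the next 24 hours?
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path taken from the log above; adjust for the host being checked.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}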
	I0717 18:40:27.147638   80857 kubeadm.go:392] StartCluster: {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:27.147756   80857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:27.147816   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.187433   80857 cri.go:89] found id: ""
	I0717 18:40:27.187498   80857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:27.197001   80857 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:27.197020   80857 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:27.197070   80857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:27.206758   80857 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:27.207822   80857 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:40:27.208505   80857 kubeconfig.go:62] /home/jenkins/minikube-integration/19283-14386/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-019549" cluster setting kubeconfig missing "old-k8s-version-019549" context setting]
	I0717 18:40:27.209497   80857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:27.212786   80857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:27.222612   80857 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.128
	I0717 18:40:27.222649   80857 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:27.222663   80857 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:27.222721   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.268127   80857 cri.go:89] found id: ""
	I0717 18:40:27.268205   80857 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:27.284334   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:27.293669   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:27.293691   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:27.293743   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:27.305348   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:27.305437   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:27.317749   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:27.328481   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:27.328547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:27.337574   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.346242   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:27.346299   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.354946   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:27.363296   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:27.363350   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:27.371925   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:27.384020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:27.571539   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.767574   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.19599736s)
	I0717 18:40:28.767612   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.011512   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.151980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.258796   80857 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:29.258886   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:29.759072   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.787614   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:33.285208   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:30.956634   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957140   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:30.957059   81910 retry.go:31] will retry after 3.31337478s: waiting for machine to come up
	I0717 18:40:34.272528   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273063   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273094   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:34.273032   81910 retry.go:31] will retry after 3.207729964s: waiting for machine to come up
	I0717 18:40:30.259921   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.758948   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.258967   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.759872   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.259187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.759299   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.259080   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.759583   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.259740   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.759068   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.697183   80180 start.go:364] duration metric: took 48.129837953s to acquireMachinesLock for "embed-certs-527415"
	I0717 18:40:38.697248   80180 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:38.697260   80180 fix.go:54] fixHost starting: 
	I0717 18:40:38.697680   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:38.697712   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:38.713575   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0717 18:40:38.713926   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:38.714396   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:40:38.714422   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:38.714762   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:38.714949   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:38.715109   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:40:38.716552   80180 fix.go:112] recreateIfNeeded on embed-certs-527415: state=Stopped err=<nil>
	I0717 18:40:38.716574   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	W0717 18:40:38.716775   80180 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:38.718610   80180 out.go:177] * Restarting existing kvm2 VM for "embed-certs-527415" ...
	I0717 18:40:35.285888   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:36.285651   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.285676   80401 pod_ready.go:81] duration metric: took 7.506876819s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.285686   80401 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292615   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.292638   80401 pod_ready.go:81] duration metric: took 6.944487ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292650   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:38.298338   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:37.484312   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484723   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has current primary IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484740   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Found IP for machine: 192.168.50.245
	I0717 18:40:37.484753   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserving static IP address...
	I0717 18:40:37.485137   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.485161   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserved static IP address: 192.168.50.245
	I0717 18:40:37.485174   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | skip adding static IP to network mk-default-k8s-diff-port-022930 - found existing host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"}
	I0717 18:40:37.485191   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Getting to WaitForSSH function...
	I0717 18:40:37.485207   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for SSH to be available...
	I0717 18:40:37.487397   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487767   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.487796   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487899   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH client type: external
	I0717 18:40:37.487927   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa (-rw-------)
	I0717 18:40:37.487961   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:37.487973   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | About to run SSH command:
	I0717 18:40:37.487992   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | exit 0
	I0717 18:40:37.608746   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:37.609085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetConfigRaw
	I0717 18:40:37.609739   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.612293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612668   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.612689   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612936   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:40:37.613176   81068 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:37.613194   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:37.613391   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.615483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615774   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.615804   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615881   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.616038   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616187   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616306   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.616470   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.616676   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.616691   81068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:37.720971   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:37.721004   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721307   81068 buildroot.go:166] provisioning hostname "default-k8s-diff-port-022930"
	I0717 18:40:37.721340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.724162   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724507   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.724535   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724712   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.724912   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725090   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725259   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.725430   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.725635   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.725651   81068 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-022930 && echo "default-k8s-diff-port-022930" | sudo tee /etc/hostname
	I0717 18:40:37.837366   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-022930
	
	I0717 18:40:37.837389   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.839920   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840291   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.840325   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.840654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840830   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840970   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.841130   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.841344   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.841363   81068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-022930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-022930/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-022930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:37.948311   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
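Note: the hostname step above is idempotent; the script only rewrites (or appends) the 127.0.1.1 entry when the machine name is not already present in /etc/hosts. A minimal Go sketch of the same idea follows; it is a hypothetical helper for illustration, not minikube's actual code, and the path/name arguments are simply the ones visible in the log.

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry maps 127.0.1.1 to name in the given hosts file,
// replacing an existing 127.0.1.1 line or appending one if absent.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), name) {
		return nil // already present, nothing to do
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + name
	var out string
	if re.MatchString(string(data)) {
		out = re.ReplaceAllString(string(data), entry)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-022930"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}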
	I0717 18:40:37.948343   81068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:37.948394   81068 buildroot.go:174] setting up certificates
	I0717 18:40:37.948406   81068 provision.go:84] configureAuth start
	I0717 18:40:37.948416   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.948732   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.951214   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951548   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.951578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951693   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.953805   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954086   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.954105   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954250   81068 provision.go:143] copyHostCerts
	I0717 18:40:37.954318   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:37.954334   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:37.954401   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:37.954531   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:37.954542   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:37.954575   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:37.954657   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:37.954667   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:37.954694   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:37.954758   81068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-022930 san=[127.0.0.1 192.168.50.245 default-k8s-diff-port-022930 localhost minikube]
	I0717 18:40:38.054084   81068 provision.go:177] copyRemoteCerts
	I0717 18:40:38.054136   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:38.054160   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.056841   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057265   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.057300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.057683   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.057839   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.057982   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.138206   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:38.163105   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 18:40:38.188449   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:38.214829   81068 provision.go:87] duration metric: took 266.409028ms to configureAuth
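Note: configureAuth signs a per-machine server certificate whose SANs cover 127.0.0.1, the machine IP, the machine name, localhost and minikube, then copies ca.pem, server.pem and server-key.pem onto the guest. A self-contained sketch of issuing such a certificate from an existing CA with crypto/x509 is shown below; the key size, validity window and organization string are assumptions for illustration, not minikube's exact values.

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate for the given IP and DNS SANs,
// signed by the provided CA certificate and key.
func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dns []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048) // assumed key size
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-022930"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}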
	I0717 18:40:38.214853   81068 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:38.215005   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:38.215068   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.217684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218010   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.218037   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.218419   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218573   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218706   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.218874   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.219021   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.219039   81068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:38.471162   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:38.471191   81068 machine.go:97] duration metric: took 858.000457ms to provisionDockerMachine
	I0717 18:40:38.471206   81068 start.go:293] postStartSetup for "default-k8s-diff-port-022930" (driver="kvm2")
	I0717 18:40:38.471220   81068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:38.471247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.471558   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:38.471590   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.474241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474673   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.474704   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474868   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.475085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.475245   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.475524   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.554800   81068 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:38.558601   81068 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:38.558624   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:38.558685   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:38.558769   81068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:38.558875   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:38.567664   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:38.589713   81068 start.go:296] duration metric: took 118.491854ms for postStartSetup
	I0717 18:40:38.589754   81068 fix.go:56] duration metric: took 19.496049651s for fixHost
	I0717 18:40:38.589777   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.592433   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592813   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.592860   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592989   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.593188   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593368   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593536   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.593738   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.593937   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.593955   81068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:38.697050   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241638.669121206
	
	I0717 18:40:38.697075   81068 fix.go:216] guest clock: 1721241638.669121206
	I0717 18:40:38.697085   81068 fix.go:229] Guest: 2024-07-17 18:40:38.669121206 +0000 UTC Remote: 2024-07-17 18:40:38.589759024 +0000 UTC m=+204.149894792 (delta=79.362182ms)
	I0717 18:40:38.697108   81068 fix.go:200] guest clock delta is within tolerance: 79.362182ms
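Note: the fix step reads the guest clock over SSH with `date +%s.%N`, compares it with the host clock, and only resyncs when the delta exceeds a tolerance (here it is 79ms and passes). A small sketch of that comparison is below; the tolerance constant is an assumption, not minikube's exact value.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts output like "1721241638.669121206" into a time.Time.
// It assumes a 9-digit nanosecond fraction, as date +%s.%N prints.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1721241638.669121206")
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for the sketch
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
}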
	I0717 18:40:38.697118   81068 start.go:83] releasing machines lock for "default-k8s-diff-port-022930", held for 19.603450588s
	I0717 18:40:38.697143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.697381   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:38.700059   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700504   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.700529   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700764   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701541   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701619   81068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:38.701672   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.701777   81068 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:38.701797   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.704169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704478   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.704503   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704657   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.704849   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705002   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705164   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.705262   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.705300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.705496   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.705663   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705817   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705967   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.825607   81068 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:38.831484   81068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:38.972775   81068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:38.978446   81068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:38.978502   81068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:38.999160   81068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:38.999180   81068 start.go:495] detecting cgroup driver to use...
	I0717 18:40:38.999234   81068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:39.016133   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:39.029031   81068 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:39.029083   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:39.042835   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:39.056981   81068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:39.168521   81068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:39.306630   81068 docker.go:233] disabling docker service ...
	I0717 18:40:39.306704   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:39.320435   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:39.337780   81068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:35.259643   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:35.759432   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.259818   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.759627   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.259968   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.758933   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.259980   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.759776   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.259988   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.496847   81068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:39.627783   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:39.641684   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:39.659183   81068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:39.659250   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.669034   81068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:39.669100   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.678708   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.688822   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.699484   81068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:39.709505   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.720715   81068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.736510   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
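Note: the sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to cgroupfs, pins conmon_cgroup to "pod", and makes sure default_sysctls allows unprivileged low ports. The same kind of regexp-based key rewrite, mirroring the sed calls, could look like the sketch below (illustrative only; the helper name is hypothetical).

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey rewrites a `key = ...` line in a TOML-style drop-in,
// mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if !re.Match(data) {
		return fmt.Errorf("%s: key %q not found", path, key)
	}
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	_ = setConfKey(conf, "cgroup_manager", "cgroupfs")
}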
	I0717 18:40:39.746991   81068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:39.757265   81068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:39.757320   81068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:39.774777   81068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
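Note: the sysctl probe exits 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded; the follow-up modprobe and the ip_forward write prepare the node for bridged pod traffic before CRI-O is restarted. A hedged sketch of the same sequence, shelling out like the runner above, with error handling trimmed:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Probe the bridge netfilter sysctl; absence means br_netfilter is not loaded yet.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Load the module, mirroring "sudo modprobe br_netfilter" from the log.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			panic(err)
		}
	}
	// Enable IPv4 forwarding, mirroring "echo 1 > /proc/sys/net/ipv4/ip_forward".
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		panic(err)
	}
}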
	I0717 18:40:39.789593   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:39.907377   81068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:40.039498   81068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:40.039592   81068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:40.044502   81068 start.go:563] Will wait 60s for crictl version
	I0717 18:40:40.044558   81068 ssh_runner.go:195] Run: which crictl
	I0717 18:40:40.048708   81068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:40.087738   81068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:40.087822   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.115460   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.150181   81068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:40:38.719828   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Start
	I0717 18:40:38.720004   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring networks are active...
	I0717 18:40:38.720983   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network default is active
	I0717 18:40:38.721537   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network mk-embed-certs-527415 is active
	I0717 18:40:38.721945   80180 main.go:141] libmachine: (embed-certs-527415) Getting domain xml...
	I0717 18:40:38.722654   80180 main.go:141] libmachine: (embed-certs-527415) Creating domain...
	I0717 18:40:40.007036   80180 main.go:141] libmachine: (embed-certs-527415) Waiting to get IP...
	I0717 18:40:40.007975   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.008511   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.008608   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.008495   82069 retry.go:31] will retry after 268.334211ms: waiting for machine to come up
	I0717 18:40:40.278129   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.278639   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.278670   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.278585   82069 retry.go:31] will retry after 350.00147ms: waiting for machine to come up
	I0717 18:40:40.630229   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.630819   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.630853   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.630768   82069 retry.go:31] will retry after 411.079615ms: waiting for machine to come up
	I0717 18:40:41.043232   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.043851   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.043880   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.043822   82069 retry.go:31] will retry after 387.726284ms: waiting for machine to come up
	I0717 18:40:41.433536   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.434058   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.434092   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.434005   82069 retry.go:31] will retry after 538.564385ms: waiting for machine to come up
	I0717 18:40:41.973917   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.974457   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.974489   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.974395   82069 retry.go:31] will retry after 778.576616ms: waiting for machine to come up
	I0717 18:40:42.754322   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:42.754872   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:42.754899   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:42.754837   82069 retry.go:31] will retry after 758.957234ms: waiting for machine to come up
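Note: while the embed-certs-527415 domain boots, libmachine polls the libvirt DHCP leases and retries with a growing, jittered interval until an IP address appears. A minimal sketch of that polling loop is below; the lookup function, intervals and deadline are placeholders, not the real retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases; it is a placeholder.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a jittered, growing delay, like the retries in the log.
func waitForIP(deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}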
	I0717 18:40:40.299673   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:40.801297   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.801325   80401 pod_ready.go:81] duration metric: took 4.508666316s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.801339   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807354   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.807372   80401 pod_ready.go:81] duration metric: took 6.024916ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807380   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812934   80401 pod_ready.go:92] pod "kube-proxy-tn5xn" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.812982   80401 pod_ready.go:81] duration metric: took 5.594378ms for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812996   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817940   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.817969   80401 pod_ready.go:81] duration metric: took 4.96427ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817982   80401 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:42.825018   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
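Note: pod_ready.go is polling the PodReady condition of each kube-system pod; metrics-server-78fcd8795b-mbtvd never flips to True, which is what eventually times out the related tests. Checking that condition with client-go looks roughly like the sketch below (clientset construction omitted; illustrative only, not the test's actual helper).

package podready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the named pod has condition Ready=True.
func isPodReady(ctx context.Context, c kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}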
	I0717 18:40:40.151220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:40.153791   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:40.154246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154472   81068 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:40.159310   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:40.172121   81068 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:40.172256   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:40.172307   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:40.215863   81068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:40.215940   81068 ssh_runner.go:195] Run: which lz4
	I0717 18:40:40.220502   81068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:40.224682   81068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:40.224714   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:41.511505   81068 crio.go:462] duration metric: took 1.291039238s to copy over tarball
	I0717 18:40:41.511574   81068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:40:43.730839   81068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.219230444s)
	I0717 18:40:43.730901   81068 crio.go:469] duration metric: took 2.219370372s to extract the tarball
	I0717 18:40:43.730912   81068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:43.767876   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:43.809466   81068 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:40:43.809494   81068 cache_images.go:84] Images are preloaded, skipping loading
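Note: the preload path above is: stat /preloaded.tar.lz4 (missing on this start), scp the ~395 MB tarball over, untar it into /var with lz4 while preserving xattrs, delete the tarball, and re-list images to confirm everything needed for v1.30.2 on cri-o is now present. A compact sketch of that check-then-extract flow, shelling out like ssh_runner does, with the paths taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball not present; it would be copied over first")
		return
	}
	// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	_ = os.Remove(tarball) // the runner removes the tarball after extraction
}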
	I0717 18:40:43.809505   81068 kubeadm.go:934] updating node { 192.168.50.245 8444 v1.30.2 crio true true} ...
	I0717 18:40:43.809646   81068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-022930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:43.809740   81068 ssh_runner.go:195] Run: crio config
	I0717 18:40:43.850614   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:43.850635   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:43.850648   81068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:43.850669   81068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-022930 NodeName:default-k8s-diff-port-022930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:43.850795   81068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-022930"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:43.850851   81068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:40:43.862674   81068 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:43.862733   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:43.873304   81068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 18:40:43.888884   81068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:43.903631   81068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 18:40:43.918768   81068 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:43.922033   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:43.932546   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:44.049621   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:44.065718   81068 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930 for IP: 192.168.50.245
	I0717 18:40:44.065747   81068 certs.go:194] generating shared ca certs ...
	I0717 18:40:44.065767   81068 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:44.065939   81068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:44.065999   81068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:44.066016   81068 certs.go:256] generating profile certs ...
	I0717 18:40:44.066149   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/client.key
	I0717 18:40:44.066224   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key.8aa7f0a0
	I0717 18:40:44.066284   81068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key
	I0717 18:40:44.066445   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:44.066494   81068 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:44.066507   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:44.066548   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:44.066579   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:44.066606   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:44.066650   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:44.067421   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:44.104160   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:44.133716   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:44.161170   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:44.190489   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 18:40:44.211792   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:44.232875   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:44.255059   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:44.276826   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:44.298357   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:44.320634   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:44.345428   81068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:44.362934   81068 ssh_runner.go:195] Run: openssl version
	I0717 18:40:44.369764   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:44.382557   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386445   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386483   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.392033   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:44.401987   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:44.411437   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415367   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415419   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.420523   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:44.429915   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:44.439371   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443248   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443301   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.448380   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
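Note: each CA certificate is installed under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (for example 3ec20f2e.0), which is how OpenSSL locates trust anchors. A small sketch of that hash-then-symlink step, delegating the hash to the openssl binary exactly as the log does (the helper name is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir as <subject-hash>.0.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, like "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/215772.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}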
	I0717 18:40:44.457828   81068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:44.462151   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:44.467474   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:44.472829   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:40.259910   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:40.759917   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.259718   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.759839   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.259129   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.759772   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.259989   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.759724   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.258978   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.759594   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.515097   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:43.515595   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:43.515616   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:43.515539   82069 retry.go:31] will retry after 1.173590835s: waiting for machine to come up
	I0717 18:40:44.691027   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:44.691479   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:44.691520   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:44.691428   82069 retry.go:31] will retry after 1.594704966s: waiting for machine to come up
	I0717 18:40:46.288022   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:46.288609   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:46.288642   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:46.288549   82069 retry.go:31] will retry after 2.014912325s: waiting for machine to come up
	I0717 18:40:45.323815   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:47.324715   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:44.478397   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:44.483860   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:44.489029   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:40:44.494220   81068 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:44.494329   81068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:44.494381   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.534380   81068 cri.go:89] found id: ""
	I0717 18:40:44.534445   81068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:44.545270   81068 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:44.545287   81068 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:44.545328   81068 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:44.555521   81068 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:44.556584   81068 kubeconfig.go:125] found "default-k8s-diff-port-022930" server: "https://192.168.50.245:8444"
	I0717 18:40:44.558675   81068 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:44.567696   81068 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.245
	I0717 18:40:44.567727   81068 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:44.567739   81068 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:44.567787   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.605757   81068 cri.go:89] found id: ""
	I0717 18:40:44.605833   81068 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:44.622187   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:44.631169   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:44.631191   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:44.631241   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:40:44.639194   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:44.639248   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:44.647542   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:40:44.655622   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:44.655708   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:44.663923   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.671733   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:44.671778   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.680375   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:40:44.688043   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:44.688085   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:44.697020   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:44.705554   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:44.812051   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.351683   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.559471   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.618086   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.678836   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:45.678926   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.179998   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.679083   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.179084   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.679042   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.179150   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.195192   81068 api_server.go:72] duration metric: took 2.516354411s to wait for apiserver process to appear ...
	I0717 18:40:48.195222   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:48.195247   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:45.259185   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:45.759765   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.259009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.759131   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.259477   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.759386   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.259977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.759374   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.259744   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.759440   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.393650   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.393688   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.393705   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.467974   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.468000   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.696340   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.702264   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:50.702308   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.195503   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.200034   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:51.200060   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.695594   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.699593   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:40:51.706025   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:40:51.706048   81068 api_server.go:131] duration metric: took 3.510818337s to wait for apiserver health ...
	I0717 18:40:51.706059   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:51.706067   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:51.707696   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
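The 403/500/200 progression above is the normal shape of an apiserver restart: anonymous requests to /healthz are rejected with 403 until the RBAC bootstrap roles exist, then the endpoint returns 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200. A minimal polling sketch of that wait, assuming the endpoint and timings from the log (this is not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	// TLS verification is skipped here only because the apiserver serves a
	// self-signed certificate during bring-up; a real client would pin the cluster CA.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Endpoint taken from the log above; 8444 is the default-k8s-diff-port profile's API port.
		if err := waitForHealthz("https://192.168.50.245:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}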
	I0717 18:40:48.305798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:48.306290   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:48.306323   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:48.306232   82069 retry.go:31] will retry after 1.789943402s: waiting for machine to come up
	I0717 18:40:50.098279   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:50.098771   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:50.098798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:50.098734   82069 retry.go:31] will retry after 2.765766483s: waiting for machine to come up
	I0717 18:40:52.867667   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:52.868191   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:52.868212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:52.868139   82069 retry.go:31] will retry after 2.762670644s: waiting for machine to come up
	I0717 18:40:49.325415   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.824015   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:53.824980   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.708887   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:51.718704   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:40:51.735711   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:51.745976   81068 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:51.746009   81068 system_pods.go:61] "coredns-7db6d8ff4d-czk4x" [80cedf0b-248a-458e-994c-81f852d78076] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:51.746022   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f9cf97bf-5fdc-4623-a78c-d29e0352ce40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:51.746036   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [599cef4d-2b4d-4cd5-9552-99de585759eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:51.746051   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [89092470-6fc9-47b2-b680-7c93945d9005] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:51.746062   81068 system_pods.go:61] "kube-proxy-hj7ss" [d260f18e-7a01-4f07-8c6a-87e8f6329f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 18:40:51.746074   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [fe098478-fcb6-4084-b773-11c2cbb995aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:51.746083   81068 system_pods.go:61] "metrics-server-569cc877fc-j9qhx" [18efb008-e7d3-435e-9156-57c16b454d07] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:51.746093   81068 system_pods.go:61] "storage-provisioner" [ac856758-62ca-485f-aa31-5cd1c7d1dbe5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:40:51.746103   81068 system_pods.go:74] duration metric: took 10.373616ms to wait for pod list to return data ...
	I0717 18:40:51.746115   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:51.749151   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:51.749173   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:51.749185   81068 node_conditions.go:105] duration metric: took 3.061813ms to run NodePressure ...
	I0717 18:40:51.749204   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:52.049486   81068 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053636   81068 kubeadm.go:739] kubelet initialised
	I0717 18:40:52.053656   81068 kubeadm.go:740] duration metric: took 4.136528ms waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053665   81068 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:52.058401   81068 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.062406   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062429   81068 pod_ready.go:81] duration metric: took 4.007504ms for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.062439   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062454   81068 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.066161   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066185   81068 pod_ready.go:81] duration metric: took 3.717781ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.066202   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066212   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.070043   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070064   81068 pod_ready.go:81] duration metric: took 3.840533ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.070074   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070080   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:54.077110   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
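The pod_ready.go lines above skip the per-pod wait because the hosting node still reports "Ready":"False"; the pod-level check itself reduces to reading the PodReady condition from the pod status. A minimal sketch of that condition check using the k8s.io/api types (assumed to be available as a module dependency; this is not minikube's pod_ready.go):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the pod's PodReady condition is True,
	// which is what the "Ready":"False" messages in the log refer to.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{}
		pod.Status.Conditions = []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		}
		fmt.Println("ready:", isPodReady(pod)) // prints: ready: false
	}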
	I0717 18:40:50.258977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.259867   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.759826   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.259016   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.759708   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.259589   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.759788   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.259753   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.759841   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.633531   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.633999   80180 main.go:141] libmachine: (embed-certs-527415) Found IP for machine: 192.168.61.90
	I0717 18:40:55.634014   80180 main.go:141] libmachine: (embed-certs-527415) Reserving static IP address...
	I0717 18:40:55.634026   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has current primary IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.634407   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.634438   80180 main.go:141] libmachine: (embed-certs-527415) Reserved static IP address: 192.168.61.90
	I0717 18:40:55.634456   80180 main.go:141] libmachine: (embed-certs-527415) DBG | skip adding static IP to network mk-embed-certs-527415 - found existing host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"}
	I0717 18:40:55.634476   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Getting to WaitForSSH function...
	I0717 18:40:55.634490   80180 main.go:141] libmachine: (embed-certs-527415) Waiting for SSH to be available...
	I0717 18:40:55.636604   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.636877   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.636904   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.637010   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH client type: external
	I0717 18:40:55.637032   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa (-rw-------)
	I0717 18:40:55.637063   80180 main.go:141] libmachine: (embed-certs-527415) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:55.637082   80180 main.go:141] libmachine: (embed-certs-527415) DBG | About to run SSH command:
	I0717 18:40:55.637094   80180 main.go:141] libmachine: (embed-certs-527415) DBG | exit 0
	I0717 18:40:55.765208   80180 main.go:141] libmachine: (embed-certs-527415) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:55.765554   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:40:55.766322   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:55.769331   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.769800   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.769827   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.770203   80180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json ...
	I0717 18:40:55.770593   80180 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:55.770620   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:55.770826   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.773837   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774313   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.774346   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774553   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.774750   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.774909   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.775060   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.775277   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.775534   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.775556   80180 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:55.888982   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:55.889013   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889259   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:40:55.889286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889501   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.891900   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892284   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.892302   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892532   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.892701   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892853   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892993   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.893136   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.893293   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.893310   80180 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-527415 && echo "embed-certs-527415" | sudo tee /etc/hostname
	I0717 18:40:56.018869   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-527415
	
	I0717 18:40:56.018898   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.021591   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.021888   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.021909   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.022286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.022489   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022646   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022765   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.022905   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.023050   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.023066   80180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-527415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-527415/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-527415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:56.146411   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:56.146455   80180 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:56.146478   80180 buildroot.go:174] setting up certificates
	I0717 18:40:56.146490   80180 provision.go:84] configureAuth start
	I0717 18:40:56.146502   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:56.146767   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.149369   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149725   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.149755   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149937   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.152431   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152753   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.152774   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152936   80180 provision.go:143] copyHostCerts
	I0717 18:40:56.153028   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:56.153041   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:56.153096   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:56.153186   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:56.153194   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:56.153214   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:56.153277   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:56.153283   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:56.153300   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:56.153349   80180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.embed-certs-527415 san=[127.0.0.1 192.168.61.90 embed-certs-527415 localhost minikube]
	I0717 18:40:56.326978   80180 provision.go:177] copyRemoteCerts
	I0717 18:40:56.327024   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:56.327045   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.329432   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329778   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.329809   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329927   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.330121   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.330295   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.330409   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.415173   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:56.438501   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 18:40:56.460520   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:56.481808   80180 provision.go:87] duration metric: took 335.305142ms to configureAuth
	I0717 18:40:56.481832   80180 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:56.482001   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:56.482063   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.484653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485044   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.485074   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485222   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.485468   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485652   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485810   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.485953   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.486108   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.486123   80180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:56.741135   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:56.741185   80180 machine.go:97] duration metric: took 970.573336ms to provisionDockerMachine
	I0717 18:40:56.741204   80180 start.go:293] postStartSetup for "embed-certs-527415" (driver="kvm2")
	I0717 18:40:56.741221   80180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:56.741245   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.741597   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:56.741625   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.744356   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.744805   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.744831   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.745025   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.745224   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.745382   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.745549   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.835435   80180 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:56.839724   80180 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:56.839753   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:56.839834   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:56.839945   80180 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:56.840083   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:56.849582   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:56.872278   80180 start.go:296] duration metric: took 131.057656ms for postStartSetup
	I0717 18:40:56.872347   80180 fix.go:56] duration metric: took 18.175085798s for fixHost
	I0717 18:40:56.872375   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.874969   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875308   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.875340   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875533   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.875722   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.875955   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.876089   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.876274   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.876459   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.876469   80180 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:56.985888   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241656.959508652
	
	I0717 18:40:56.985907   80180 fix.go:216] guest clock: 1721241656.959508652
	I0717 18:40:56.985914   80180 fix.go:229] Guest: 2024-07-17 18:40:56.959508652 +0000 UTC Remote: 2024-07-17 18:40:56.872354453 +0000 UTC m=+348.896679896 (delta=87.154199ms)
	I0717 18:40:56.985939   80180 fix.go:200] guest clock delta is within tolerance: 87.154199ms
	I0717 18:40:56.985944   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 18.288718042s
	I0717 18:40:56.985964   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.986210   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.988716   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989086   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.989114   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989279   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989786   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989966   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.990055   80180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:56.990092   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.990360   80180 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:56.990390   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.992519   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992816   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.992835   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992852   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992984   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993162   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.993234   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.993356   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993401   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993499   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.993541   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993754   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993915   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:57.116598   80180 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:57.122546   80180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:57.268379   80180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:57.274748   80180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:57.274819   80180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:57.290374   80180 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:57.290394   80180 start.go:495] detecting cgroup driver to use...
	I0717 18:40:57.290443   80180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:57.307521   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:57.323478   80180 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:57.323554   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:57.337078   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:57.350181   80180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:57.463512   80180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:57.626650   80180 docker.go:233] disabling docker service ...
	I0717 18:40:57.626714   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:57.641067   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:57.655085   80180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:57.802789   80180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:57.919140   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:57.932620   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:57.949471   80180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:57.949528   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.960297   80180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:57.960366   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.970890   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.980768   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.990723   80180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:58.000791   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.010332   80180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.026611   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.036106   80180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:58.044742   80180 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:58.044791   80180 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:58.056584   80180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
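When the bridge-netfilter sysctl cannot be read (as in the "cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables" error above), the run falls back to loading br_netfilter and enabling IPv4 forwarding. A local sketch of that fallback in Go; minikube actually issues the same commands over SSH via ssh_runner, so this is illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetfilter mirrors the fallback seen in the log: if the bridge
// netfilter sysctl is missing, load br_netfilter, then enable IP forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// /proc/sys/net/bridge/* is absent until the module is loaded.
		if err := exec.Command("sudo", "modprobe", "br_netfilter"); err.Run() != nil {
			return fmt.Errorf("modprobe br_netfilter failed")
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}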
	I0717 18:40:58.065470   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:58.182119   80180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:58.319330   80180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:58.319400   80180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:58.326361   80180 start.go:563] Will wait 60s for crictl version
	I0717 18:40:58.326405   80180 ssh_runner.go:195] Run: which crictl
	I0717 18:40:58.329951   80180 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:58.366561   80180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:58.366668   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.398483   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.427421   80180 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:40:56.324834   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.325283   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:56.077315   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.077815   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:55.259450   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.759932   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.259395   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.759855   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.259739   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.759436   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.258951   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.759931   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.259588   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.759651   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.428872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:58.431182   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431554   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:58.431580   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431756   80180 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:58.435914   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:58.448777   80180 kubeadm.go:883] updating cluster {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:58.448923   80180 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:58.449018   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:58.488011   80180 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:58.488077   80180 ssh_runner.go:195] Run: which lz4
	I0717 18:40:58.491828   80180 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 18:40:58.495609   80180 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:58.495640   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:59.686445   80180 crio.go:462] duration metric: took 1.194619366s to copy over tarball
	I0717 18:40:59.686513   80180 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:41:01.862679   80180 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176132338s)
	I0717 18:41:01.862710   80180 crio.go:469] duration metric: took 2.176236509s to extract the tarball
	I0717 18:41:01.862719   80180 ssh_runner.go:146] rm: /preloaded.tar.lz4
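The preload step above checks for /preloaded.tar.lz4 on the guest, copies the cached-image tarball over when it is missing, and unpacks it under /var so cri-o finds the images. A rough Go equivalent of the check-and-extract portion (a local sketch only; the scp of the tarball is omitted):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into dest using the same
// tar invocation seen in the log. Requires lz4 and sudo on the target host.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}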
	I0717 18:41:01.901813   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:41:01.945403   80180 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:41:01.945429   80180 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:41:01.945438   80180 kubeadm.go:934] updating node { 192.168.61.90 8443 v1.30.2 crio true true} ...
	I0717 18:41:01.945554   80180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-527415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:41:01.945631   80180 ssh_runner.go:195] Run: crio config
	I0717 18:41:01.991102   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:01.991130   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:01.991144   80180 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:41:01.991168   80180 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.90 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-527415 NodeName:embed-certs-527415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:41:01.991331   80180 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-527415"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:41:01.991397   80180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:41:02.001007   80180 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:41:02.001082   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:41:02.010130   80180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0717 18:41:02.025405   80180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:41:02.041167   80180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0717 18:41:02.057441   80180 ssh_runner.go:195] Run: grep 192.168.61.90	control-plane.minikube.internal$ /etc/hosts
	I0717 18:41:02.060878   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
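The hosts-file update above strips any stale control-plane.minikube.internal entry and appends the node's current IP. A rough Go equivalent of that grep-and-rewrite pipeline (a sketch; the path, IP, and hostname are taken from the log, and writing /etc/hosts requires root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry rewrites the hosts file so exactly one line maps name to ip,
// mirroring the `{ grep -v ...; echo ...; } > /tmp/h.$$` pipeline in the log.
func pinHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHostsEntry("/etc/hosts", "192.168.61.90", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}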
	I0717 18:41:02.072984   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:41:02.188194   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:41:02.204599   80180 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415 for IP: 192.168.61.90
	I0717 18:41:02.204623   80180 certs.go:194] generating shared ca certs ...
	I0717 18:41:02.204643   80180 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:41:02.204822   80180 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:41:02.204885   80180 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:41:02.204899   80180 certs.go:256] generating profile certs ...
	I0717 18:41:02.205047   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key
	I0717 18:41:02.205129   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9
	I0717 18:41:02.205188   80180 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key
	I0717 18:41:02.205372   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:41:02.205436   80180 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:41:02.205451   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:41:02.205486   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:41:02.205526   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:41:02.205556   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:41:02.205612   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:41:02.206441   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:41:02.234135   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:41:02.259780   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:41:02.285464   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:41:02.316267   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 18:41:02.348835   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:41:02.375505   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:41:02.402683   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:41:02.426689   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:41:02.449328   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:41:02.472140   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:41:02.494016   80180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:41:02.512612   80180 ssh_runner.go:195] Run: openssl version
	I0717 18:41:02.519908   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:41:02.532706   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538136   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538191   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.545493   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:41:02.558832   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:41:02.570455   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575515   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575582   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.581428   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:41:02.592439   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:41:02.602823   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608370   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608433   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.615367   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:41:02.628355   80180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:41:02.632772   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:41:02.638325   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:41:02.643635   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:41:02.648960   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:41:02.654088   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:41:02.659220   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
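Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509, as a sketch; the cert path used is one of those probed in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the condition `openssl x509 -checkend 86400` tests for in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}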
	I0717 18:41:02.664325   80180 kubeadm.go:392] StartCluster: {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:41:02.664444   80180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:41:02.664495   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.699590   80180 cri.go:89] found id: ""
	I0717 18:41:02.699676   80180 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:41:02.709427   80180 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:41:02.709452   80180 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:41:02.709503   80180 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:41:02.718489   80180 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:41:02.719505   80180 kubeconfig.go:125] found "embed-certs-527415" server: "https://192.168.61.90:8443"
	I0717 18:41:02.721457   80180 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:41:02.730258   80180 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.90
	I0717 18:41:02.730288   80180 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:41:02.730301   80180 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:41:02.730367   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.768268   80180 cri.go:89] found id: ""
	I0717 18:41:02.768339   80180 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:41:02.786699   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:41:02.796888   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:41:02.796912   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:41:02.796965   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:41:02.805633   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:41:02.805703   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:41:02.817624   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:41:02.827840   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:41:02.827902   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:41:02.836207   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.844201   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:41:02.844265   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.852667   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:41:02.860697   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:41:02.860741   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:41:02.869133   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:41:02.877992   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:02.986350   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:00.823447   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.825375   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:00.578095   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.576899   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.576927   81068 pod_ready.go:81] duration metric: took 10.506835962s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.576953   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584912   81068 pod_ready.go:92] pod "kube-proxy-hj7ss" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.584933   81068 pod_ready.go:81] duration metric: took 7.972079ms for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584964   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590342   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.590366   81068 pod_ready.go:81] duration metric: took 5.392364ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590380   81068 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:00.259461   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:00.759148   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.259596   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.759943   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.259670   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.759900   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.259745   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.759843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.259902   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.759850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.874112   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.091026   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.170734   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.292719   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:41:04.292826   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.793710   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.292924   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.792872   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.293626   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.793632   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.810658   80180 api_server.go:72] duration metric: took 2.517938682s to wait for apiserver process to appear ...
	I0717 18:41:06.810685   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:41:06.810705   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:05.323684   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:07.324653   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:04.596794   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:06.597411   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:09.097409   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:05.259624   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.759258   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.259346   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.759041   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.259467   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.759164   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.259047   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.759959   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.259372   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.759259   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.612683   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.612715   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.612728   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.633949   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.633975   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.811272   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.815690   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:09.815720   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
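The healthz probes above hit https://192.168.61.90:8443/healthz anonymously and keep retrying while the apiserver returns 403 (RBAC not bootstrapped yet) or 500 (post-start hooks still failing). A Go sketch of such a polling loop; TLS verification is skipped only to match the anonymous probe, and this is not minikube's exact implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz requests the apiserver /healthz endpoint until it returns 200
// or the deadline passes, printing the body of each failed check.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", deadline)
}

func main() {
	if err := pollHealthz("https://192.168.61.90:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}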
	I0717 18:41:10.311256   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.319587   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.319620   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:10.811133   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.815819   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.815862   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.311037   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.315892   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.315923   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.811534   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.816601   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.816631   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.311178   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.315484   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.315510   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.811068   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.821016   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.821048   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:13.311166   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:13.315879   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:41:13.322661   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:41:13.322700   80180 api_server.go:131] duration metric: took 6.512007091s to wait for apiserver health ...
	I0717 18:41:13.322713   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:13.322722   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:13.324516   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:41:09.325535   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.325697   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:13.327238   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.597479   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:14.098908   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:10.259845   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:10.759671   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.259895   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.759877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.259003   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.759685   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.759844   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.259541   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.759709   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.325935   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:41:13.337601   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:41:13.354366   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:41:13.364678   80180 system_pods.go:59] 8 kube-system pods found
	I0717 18:41:13.364715   80180 system_pods.go:61] "coredns-7db6d8ff4d-2fnlb" [86d50e9b-fb88-4332-90c5-a969b0654635] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:41:13.364726   80180 system_pods.go:61] "etcd-embed-certs-527415" [9d8ac0a8-4639-48d8-8ac4-88b0bd1e2082] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:41:13.364735   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [7f72c4f9-f1db-4ac6-83e1-2b94245107c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:41:13.364743   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [96081a97-2a90-4fec-84cb-9a399a43aeb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:41:13.364752   80180 system_pods.go:61] "kube-proxy-jltfs" [27f6259e-80cc-4881-bb06-6a2ad529179c] Running
	I0717 18:41:13.364763   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [bed7b515-7ab0-460c-a13f-037f29576f30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:41:13.364775   80180 system_pods.go:61] "metrics-server-569cc877fc-8md44" [1b9d50c8-6ca0-41c3-92d9-eebdccbf1a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:41:13.364783   80180 system_pods.go:61] "storage-provisioner" [ccb34b69-d28d-477e-8c7a-0acdc547bec7] Running
	I0717 18:41:13.364791   80180 system_pods.go:74] duration metric: took 10.40947ms to wait for pod list to return data ...
	I0717 18:41:13.364803   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:41:13.367687   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:41:13.367712   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:41:13.367725   80180 node_conditions.go:105] duration metric: took 2.912986ms to run NodePressure ...
	I0717 18:41:13.367745   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:13.630827   80180 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636658   80180 kubeadm.go:739] kubelet initialised
	I0717 18:41:13.636688   80180 kubeadm.go:740] duration metric: took 5.830484ms waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636699   80180 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:41:13.642171   80180 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.650539   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650573   80180 pod_ready.go:81] duration metric: took 8.374432ms for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.650585   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650599   80180 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.655470   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655500   80180 pod_ready.go:81] duration metric: took 4.8911ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.655512   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655520   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.662448   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662479   80180 pod_ready.go:81] duration metric: took 6.949002ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.662490   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662499   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.757454   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757485   80180 pod_ready.go:81] duration metric: took 94.976348ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.757494   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757501   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157339   80180 pod_ready.go:92] pod "kube-proxy-jltfs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:14.157363   80180 pod_ready.go:81] duration metric: took 399.852649ms for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157381   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:16.163623   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:15.825045   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.323440   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:16.596320   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.596807   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:15.259558   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:15.759585   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.259850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.760009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.259385   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.759208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.259218   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.759779   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.259666   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.759781   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.174371   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.664423   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.663932   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:22.663955   80180 pod_ready.go:81] duration metric: took 8.506565077s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:22.663969   80180 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:20.324547   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.824318   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:21.096071   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:23.596775   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.259286   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:20.759048   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.259801   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.759595   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.259582   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.759871   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.259349   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.759659   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.259964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.759899   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.671105   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:27.170247   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:24.825017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.825067   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.096196   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:28.097501   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:25.259559   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:25.759773   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.759924   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.259509   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.759986   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.259792   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.759564   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:29.259060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:29.259143   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:29.298974   80857 cri.go:89] found id: ""
	I0717 18:41:29.299006   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.299016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:29.299024   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:29.299087   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:29.333764   80857 cri.go:89] found id: ""
	I0717 18:41:29.333786   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.333793   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:29.333801   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:29.333849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:29.369639   80857 cri.go:89] found id: ""
	I0717 18:41:29.369674   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.369688   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:29.369697   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:29.369762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:29.403453   80857 cri.go:89] found id: ""
	I0717 18:41:29.403481   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.403489   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:29.403498   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:29.403555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:29.436662   80857 cri.go:89] found id: ""
	I0717 18:41:29.436687   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.436695   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:29.436701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:29.436749   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:29.471013   80857 cri.go:89] found id: ""
	I0717 18:41:29.471053   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.471064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:29.471074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:29.471139   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:29.502754   80857 cri.go:89] found id: ""
	I0717 18:41:29.502780   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.502787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:29.502793   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:29.502842   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:29.534205   80857 cri.go:89] found id: ""
	I0717 18:41:29.534232   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.534239   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:29.534247   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:29.534259   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:29.585406   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:29.585438   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:29.600629   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:29.600660   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:29.719788   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:29.719807   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:29.719819   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:29.785626   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:29.785662   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:29.669918   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.670544   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:29.325013   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.828532   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:30.097685   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.596760   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.325522   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:32.338046   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:32.338120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:32.370073   80857 cri.go:89] found id: ""
	I0717 18:41:32.370099   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.370106   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:32.370112   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:32.370165   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:32.408764   80857 cri.go:89] found id: ""
	I0717 18:41:32.408789   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.408799   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:32.408806   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:32.408862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:32.449078   80857 cri.go:89] found id: ""
	I0717 18:41:32.449108   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.449118   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:32.449125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:32.449176   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:32.481990   80857 cri.go:89] found id: ""
	I0717 18:41:32.482015   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.482022   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:32.482028   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:32.482077   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:32.521902   80857 cri.go:89] found id: ""
	I0717 18:41:32.521932   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.521942   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:32.521949   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:32.521997   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:32.554148   80857 cri.go:89] found id: ""
	I0717 18:41:32.554177   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.554206   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:32.554216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:32.554270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:32.587342   80857 cri.go:89] found id: ""
	I0717 18:41:32.587366   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.587374   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:32.587379   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:32.587425   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:32.619227   80857 cri.go:89] found id: ""
	I0717 18:41:32.619259   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.619270   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:32.619281   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:32.619296   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:32.669085   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:32.669124   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:32.682464   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:32.682500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:32.749218   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:32.749234   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:32.749245   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:32.814510   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:32.814545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:33.670578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.670952   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.671373   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:34.324458   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:36.823615   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:38.825194   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.096041   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.096436   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:39.096906   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.362866   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:35.375563   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:35.375643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:35.412355   80857 cri.go:89] found id: ""
	I0717 18:41:35.412380   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.412388   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:35.412393   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:35.412439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:35.446596   80857 cri.go:89] found id: ""
	I0717 18:41:35.446621   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.446629   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:35.446634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:35.446691   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:35.481695   80857 cri.go:89] found id: ""
	I0717 18:41:35.481717   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.481725   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:35.481730   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:35.481783   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:35.514528   80857 cri.go:89] found id: ""
	I0717 18:41:35.514573   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.514584   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:35.514592   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:35.514657   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:35.547831   80857 cri.go:89] found id: ""
	I0717 18:41:35.547858   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.547871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:35.547879   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:35.547941   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:35.579059   80857 cri.go:89] found id: ""
	I0717 18:41:35.579084   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.579097   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:35.579104   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:35.579164   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:35.616442   80857 cri.go:89] found id: ""
	I0717 18:41:35.616480   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.616487   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:35.616492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:35.616545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:35.647535   80857 cri.go:89] found id: ""
	I0717 18:41:35.647564   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.647571   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:35.647579   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:35.647595   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:35.696664   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:35.696692   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:35.710474   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:35.710499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:35.785569   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:35.785595   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:35.785611   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:35.865750   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:35.865785   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:38.405391   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:38.417737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:38.417806   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:38.453848   80857 cri.go:89] found id: ""
	I0717 18:41:38.453877   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.453888   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:38.453895   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:38.453949   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:38.487083   80857 cri.go:89] found id: ""
	I0717 18:41:38.487112   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.487122   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:38.487129   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:38.487190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:38.517700   80857 cri.go:89] found id: ""
	I0717 18:41:38.517729   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.517738   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:38.517746   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:38.517808   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:38.547587   80857 cri.go:89] found id: ""
	I0717 18:41:38.547616   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.547625   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:38.547632   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:38.547780   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:38.581511   80857 cri.go:89] found id: ""
	I0717 18:41:38.581535   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.581542   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:38.581548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:38.581675   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:38.618308   80857 cri.go:89] found id: ""
	I0717 18:41:38.618327   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.618334   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:38.618340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:38.618401   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:38.658237   80857 cri.go:89] found id: ""
	I0717 18:41:38.658267   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.658278   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:38.658298   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:38.658359   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:38.694044   80857 cri.go:89] found id: ""
	I0717 18:41:38.694071   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.694080   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:38.694090   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:38.694106   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:38.746621   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:38.746658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:38.758781   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:38.758805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:38.827327   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:38.827345   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:38.827357   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:38.899731   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:38.899762   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:40.170106   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:42.170391   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:40.825940   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.327489   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.097668   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.597625   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.437479   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:41.451264   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:41.451336   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:41.489053   80857 cri.go:89] found id: ""
	I0717 18:41:41.489083   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.489093   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:41.489101   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:41.489162   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:41.521954   80857 cri.go:89] found id: ""
	I0717 18:41:41.521985   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.521996   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:41.522003   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:41.522068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:41.556847   80857 cri.go:89] found id: ""
	I0717 18:41:41.556875   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.556884   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:41.556893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:41.557024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:41.591232   80857 cri.go:89] found id: ""
	I0717 18:41:41.591255   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.591263   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:41.591269   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:41.591315   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:41.624533   80857 cri.go:89] found id: ""
	I0717 18:41:41.624565   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.624576   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:41.624583   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:41.624644   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:41.656033   80857 cri.go:89] found id: ""
	I0717 18:41:41.656063   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.656073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:41.656080   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:41.656140   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:41.691686   80857 cri.go:89] found id: ""
	I0717 18:41:41.691715   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.691725   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:41.691732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:41.691789   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:41.724688   80857 cri.go:89] found id: ""
	I0717 18:41:41.724718   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.724729   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:41.724741   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:41.724760   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:41.802855   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:41.802882   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:41.839242   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:41.839271   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:41.889028   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:41.889058   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:41.901598   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:41.901627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:41.972632   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.472824   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:44.487673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:44.487745   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:44.530173   80857 cri.go:89] found id: ""
	I0717 18:41:44.530204   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.530216   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:44.530224   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:44.530288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:44.577865   80857 cri.go:89] found id: ""
	I0717 18:41:44.577891   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.577899   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:44.577905   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:44.577967   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:44.621528   80857 cri.go:89] found id: ""
	I0717 18:41:44.621551   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.621559   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:44.621564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:44.621622   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:44.655456   80857 cri.go:89] found id: ""
	I0717 18:41:44.655488   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.655498   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:44.655505   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:44.655570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:44.688729   80857 cri.go:89] found id: ""
	I0717 18:41:44.688757   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.688767   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:44.688774   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:44.688832   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:44.720190   80857 cri.go:89] found id: ""
	I0717 18:41:44.720220   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.720231   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:44.720238   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:44.720294   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:44.750109   80857 cri.go:89] found id: ""
	I0717 18:41:44.750135   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.750142   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:44.750147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:44.750203   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:44.780039   80857 cri.go:89] found id: ""
	I0717 18:41:44.780066   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.780090   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:44.780098   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:44.780111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:44.829641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:44.829675   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:44.842587   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:44.842616   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:44.906331   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.906355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:44.906369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:44.983364   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:44.983400   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:44.671557   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.170565   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:45.827780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.324627   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:46.096988   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.596469   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.525057   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:47.538586   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:47.538639   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:47.574805   80857 cri.go:89] found id: ""
	I0717 18:41:47.574832   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.574843   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:47.574849   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:47.574906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:47.609576   80857 cri.go:89] found id: ""
	I0717 18:41:47.609603   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.609611   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:47.609617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:47.609662   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:47.643899   80857 cri.go:89] found id: ""
	I0717 18:41:47.643927   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.643936   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:47.643941   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:47.643990   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:47.680365   80857 cri.go:89] found id: ""
	I0717 18:41:47.680404   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.680412   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:47.680418   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:47.680475   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:47.719038   80857 cri.go:89] found id: ""
	I0717 18:41:47.719061   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.719069   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:47.719074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:47.719118   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:47.751708   80857 cri.go:89] found id: ""
	I0717 18:41:47.751735   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.751744   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:47.751750   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:47.751807   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:47.789803   80857 cri.go:89] found id: ""
	I0717 18:41:47.789838   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.789850   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:47.789858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:47.789921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:47.821450   80857 cri.go:89] found id: ""
	I0717 18:41:47.821477   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.821487   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:47.821496   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:47.821511   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:47.886501   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:47.886526   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:47.886544   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:47.960142   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:47.960177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:47.995012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:47.995046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:48.046848   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:48.046884   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:49.670208   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:52.169471   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.324628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.597215   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.096114   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.560990   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:50.574906   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:50.575051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:50.607647   80857 cri.go:89] found id: ""
	I0717 18:41:50.607674   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.607687   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:50.607696   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:50.607756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:50.640621   80857 cri.go:89] found id: ""
	I0717 18:41:50.640651   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.640660   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:50.640667   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:50.640741   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:50.675269   80857 cri.go:89] found id: ""
	I0717 18:41:50.675293   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.675303   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:50.675313   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:50.675369   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:50.707915   80857 cri.go:89] found id: ""
	I0717 18:41:50.707938   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.707946   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:50.707951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:50.708006   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:50.741149   80857 cri.go:89] found id: ""
	I0717 18:41:50.741170   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.741178   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:50.741184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:50.741288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:50.772768   80857 cri.go:89] found id: ""
	I0717 18:41:50.772792   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.772799   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:50.772804   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:50.772854   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:50.804996   80857 cri.go:89] found id: ""
	I0717 18:41:50.805018   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.805028   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:50.805035   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:50.805094   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:50.838933   80857 cri.go:89] found id: ""
	I0717 18:41:50.838960   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.838971   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:50.838982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:50.838997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:50.886415   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:50.886444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:50.899024   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:50.899049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:50.965388   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:50.965416   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:50.965434   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:51.044449   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:51.044490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.580749   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:53.593759   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:53.593841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:53.626541   80857 cri.go:89] found id: ""
	I0717 18:41:53.626573   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.626582   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:53.626588   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:53.626645   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:53.658492   80857 cri.go:89] found id: ""
	I0717 18:41:53.658520   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.658529   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:53.658537   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:53.658600   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:53.694546   80857 cri.go:89] found id: ""
	I0717 18:41:53.694582   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.694590   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:53.694595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:53.694650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:53.727028   80857 cri.go:89] found id: ""
	I0717 18:41:53.727053   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.727061   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:53.727067   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:53.727129   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:53.762869   80857 cri.go:89] found id: ""
	I0717 18:41:53.762897   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.762906   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:53.762913   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:53.762976   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:53.794133   80857 cri.go:89] found id: ""
	I0717 18:41:53.794158   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.794166   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:53.794172   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:53.794225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:53.828432   80857 cri.go:89] found id: ""
	I0717 18:41:53.828463   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.828473   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:53.828484   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:53.828546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:53.863316   80857 cri.go:89] found id: ""
	I0717 18:41:53.863345   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.863353   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:53.863362   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:53.863384   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.897353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:53.897380   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:53.944213   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:53.944242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:53.957484   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:53.957509   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:54.025962   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:54.025992   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:54.026006   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:54.170642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.672407   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.325017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:57.823877   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.596492   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:58.096397   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.609502   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:56.621849   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:56.621913   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:56.657469   80857 cri.go:89] found id: ""
	I0717 18:41:56.657498   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.657510   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:56.657517   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:56.657579   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:56.691298   80857 cri.go:89] found id: ""
	I0717 18:41:56.691320   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.691327   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:56.691332   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:56.691386   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:56.723305   80857 cri.go:89] found id: ""
	I0717 18:41:56.723334   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.723344   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:56.723352   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:56.723417   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:56.755893   80857 cri.go:89] found id: ""
	I0717 18:41:56.755918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.755926   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:56.755931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:56.755982   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:56.787777   80857 cri.go:89] found id: ""
	I0717 18:41:56.787807   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.787819   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:56.787828   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:56.787894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:56.821126   80857 cri.go:89] found id: ""
	I0717 18:41:56.821152   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.821163   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:56.821170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:56.821228   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:56.855894   80857 cri.go:89] found id: ""
	I0717 18:41:56.855918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.855926   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:56.855931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:56.855980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:56.893483   80857 cri.go:89] found id: ""
	I0717 18:41:56.893505   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.893512   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:56.893521   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:56.893532   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:56.945355   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:56.945385   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:56.958426   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:56.958451   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:57.025542   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:57.025571   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:57.025585   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:57.100497   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:57.100528   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:59.636400   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:59.648517   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:59.648571   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:59.683954   80857 cri.go:89] found id: ""
	I0717 18:41:59.683978   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.683988   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:59.683995   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:59.684065   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:59.719135   80857 cri.go:89] found id: ""
	I0717 18:41:59.719162   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.719172   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:59.719179   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:59.719243   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:59.755980   80857 cri.go:89] found id: ""
	I0717 18:41:59.756012   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.756023   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:59.756030   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:59.756091   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:59.788147   80857 cri.go:89] found id: ""
	I0717 18:41:59.788176   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.788185   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:59.788191   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:59.788239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:59.819646   80857 cri.go:89] found id: ""
	I0717 18:41:59.819670   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.819679   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:59.819685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:59.819738   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:59.852487   80857 cri.go:89] found id: ""
	I0717 18:41:59.852508   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.852516   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:59.852521   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:59.852586   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:59.883761   80857 cri.go:89] found id: ""
	I0717 18:41:59.883794   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.883805   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:59.883812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:59.883870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:59.914854   80857 cri.go:89] found id: ""
	I0717 18:41:59.914882   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.914889   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:59.914896   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:59.914909   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:59.995619   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:59.995650   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:00.034444   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:00.034472   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:59.172253   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.670422   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:59.824347   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.824444   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:03.826580   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.096457   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:02.596587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.084278   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:00.084308   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:00.097771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:00.097796   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:00.161753   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:02.662134   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:02.676200   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:02.676277   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:02.711606   80857 cri.go:89] found id: ""
	I0717 18:42:02.711640   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.711652   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:02.711659   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:02.711711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:02.744704   80857 cri.go:89] found id: ""
	I0717 18:42:02.744728   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.744735   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:02.744741   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:02.744800   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:02.778815   80857 cri.go:89] found id: ""
	I0717 18:42:02.778846   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.778859   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:02.778868   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:02.778936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:02.810896   80857 cri.go:89] found id: ""
	I0717 18:42:02.810928   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.810941   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:02.810950   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:02.811024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:02.843868   80857 cri.go:89] found id: ""
	I0717 18:42:02.843892   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.843903   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:02.843910   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:02.843972   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:02.876311   80857 cri.go:89] found id: ""
	I0717 18:42:02.876338   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.876348   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:02.876356   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:02.876420   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:02.910752   80857 cri.go:89] found id: ""
	I0717 18:42:02.910776   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.910784   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:02.910789   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:02.910835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:02.947286   80857 cri.go:89] found id: ""
	I0717 18:42:02.947318   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.947328   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:02.947337   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:02.947351   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:02.999512   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:02.999542   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:03.014063   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:03.014094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:03.081822   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:03.081844   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:03.081858   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:03.161088   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:03.161117   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:04.171168   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.669508   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.324608   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:08.825084   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:04.597129   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:07.098716   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:05.699198   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:05.711597   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:05.711654   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:05.749653   80857 cri.go:89] found id: ""
	I0717 18:42:05.749684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.749694   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:05.749703   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:05.749757   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:05.785095   80857 cri.go:89] found id: ""
	I0717 18:42:05.785118   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.785125   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:05.785134   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:05.785179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:05.818085   80857 cri.go:89] found id: ""
	I0717 18:42:05.818111   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.818119   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:05.818125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:05.818171   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:05.851872   80857 cri.go:89] found id: ""
	I0717 18:42:05.851895   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.851902   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:05.851907   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:05.851958   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:05.883924   80857 cri.go:89] found id: ""
	I0717 18:42:05.883948   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.883958   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:05.883965   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:05.884025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:05.916365   80857 cri.go:89] found id: ""
	I0717 18:42:05.916396   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.916407   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:05.916414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:05.916473   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:05.950656   80857 cri.go:89] found id: ""
	I0717 18:42:05.950684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.950695   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:05.950701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:05.950762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:05.992132   80857 cri.go:89] found id: ""
	I0717 18:42:05.992160   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.992169   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:05.992177   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:05.992190   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:06.042162   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:06.042192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:06.055594   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:06.055619   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:06.123007   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:06.123038   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:06.123068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:06.200429   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:06.200460   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.739039   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:08.751520   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:08.751575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:08.783765   80857 cri.go:89] found id: ""
	I0717 18:42:08.783794   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.783805   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:08.783812   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:08.783864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:08.815200   80857 cri.go:89] found id: ""
	I0717 18:42:08.815227   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.815236   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:08.815242   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:08.815289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:08.848970   80857 cri.go:89] found id: ""
	I0717 18:42:08.849002   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.849012   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:08.849021   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:08.849084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:08.881832   80857 cri.go:89] found id: ""
	I0717 18:42:08.881859   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.881866   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:08.881874   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:08.881922   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:08.913119   80857 cri.go:89] found id: ""
	I0717 18:42:08.913142   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.913149   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:08.913155   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:08.913201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:08.947471   80857 cri.go:89] found id: ""
	I0717 18:42:08.947499   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.947509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:08.947515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:08.947570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:08.979570   80857 cri.go:89] found id: ""
	I0717 18:42:08.979599   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.979609   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:08.979615   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:08.979670   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:09.012960   80857 cri.go:89] found id: ""
	I0717 18:42:09.012991   80857 logs.go:276] 0 containers: []
	W0717 18:42:09.013002   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:09.013012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:09.013027   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:09.065732   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:09.065769   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:09.079572   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:09.079602   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:09.151737   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:09.151754   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:09.151766   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:09.230185   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:09.230218   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.670185   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:10.671336   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.325340   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:13.824087   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:09.595757   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.596784   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:14.096765   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.767189   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:11.780044   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:11.780115   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:11.812700   80857 cri.go:89] found id: ""
	I0717 18:42:11.812722   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.812730   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:11.812736   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:11.812781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:11.846855   80857 cri.go:89] found id: ""
	I0717 18:42:11.846883   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.846893   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:11.846900   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:11.846962   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:11.877671   80857 cri.go:89] found id: ""
	I0717 18:42:11.877700   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.877710   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:11.877716   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:11.877767   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:11.908703   80857 cri.go:89] found id: ""
	I0717 18:42:11.908728   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.908735   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:11.908740   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:11.908786   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:11.942191   80857 cri.go:89] found id: ""
	I0717 18:42:11.942218   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.942225   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:11.942231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:11.942284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:11.974751   80857 cri.go:89] found id: ""
	I0717 18:42:11.974782   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.974798   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:11.974807   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:11.974876   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:12.006287   80857 cri.go:89] found id: ""
	I0717 18:42:12.006317   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.006327   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:12.006335   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:12.006396   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:12.036524   80857 cri.go:89] found id: ""
	I0717 18:42:12.036546   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.036554   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:12.036575   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:12.036599   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:12.085073   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:12.085109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:12.098908   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:12.098937   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:12.161665   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:12.161687   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:12.161702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:12.240349   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:12.240401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:14.781101   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:14.794081   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:14.794149   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:14.828975   80857 cri.go:89] found id: ""
	I0717 18:42:14.829003   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.829013   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:14.829021   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:14.829072   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:14.864858   80857 cri.go:89] found id: ""
	I0717 18:42:14.864886   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.864896   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:14.864903   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:14.864986   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:14.897961   80857 cri.go:89] found id: ""
	I0717 18:42:14.897983   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.897991   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:14.897996   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:14.898041   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:14.935499   80857 cri.go:89] found id: ""
	I0717 18:42:14.935521   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.935529   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:14.935534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:14.935591   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:14.967581   80857 cri.go:89] found id: ""
	I0717 18:42:14.967605   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.967621   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:14.967629   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:14.967688   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:15.001844   80857 cri.go:89] found id: ""
	I0717 18:42:15.001876   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.001888   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:15.001894   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:15.001942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:15.038940   80857 cri.go:89] found id: ""
	I0717 18:42:15.038967   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.038977   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:15.038985   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:15.039043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:13.170111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.669712   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:17.669916   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.325511   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:18.823820   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.597587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:19.096905   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.072636   80857 cri.go:89] found id: ""
	I0717 18:42:15.072665   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.072677   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:15.072688   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:15.072703   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:15.124889   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:15.124934   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:15.138661   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:15.138691   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:15.208762   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:15.208791   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:15.208806   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:15.281302   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:15.281336   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:17.817136   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:17.831013   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:17.831078   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:17.867065   80857 cri.go:89] found id: ""
	I0717 18:42:17.867091   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.867101   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:17.867108   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:17.867166   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:17.904143   80857 cri.go:89] found id: ""
	I0717 18:42:17.904171   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.904180   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:17.904188   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:17.904248   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:17.937450   80857 cri.go:89] found id: ""
	I0717 18:42:17.937478   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.937487   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:17.937492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:17.937556   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:17.970650   80857 cri.go:89] found id: ""
	I0717 18:42:17.970679   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.970689   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:17.970696   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:17.970754   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:18.002329   80857 cri.go:89] found id: ""
	I0717 18:42:18.002355   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.002364   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:18.002371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:18.002430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:18.035253   80857 cri.go:89] found id: ""
	I0717 18:42:18.035278   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.035288   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:18.035295   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:18.035356   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:18.070386   80857 cri.go:89] found id: ""
	I0717 18:42:18.070419   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.070431   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:18.070439   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:18.070507   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:18.106148   80857 cri.go:89] found id: ""
	I0717 18:42:18.106170   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.106177   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:18.106185   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:18.106201   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:18.157359   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:18.157390   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:18.171757   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:18.171782   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:18.242795   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:18.242818   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:18.242831   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:18.316221   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:18.316255   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:19.670562   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.171111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.824266   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.824366   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:21.596773   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.098051   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.857953   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:20.870813   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:20.870882   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:20.906033   80857 cri.go:89] found id: ""
	I0717 18:42:20.906065   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.906075   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:20.906083   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:20.906142   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:20.942292   80857 cri.go:89] found id: ""
	I0717 18:42:20.942316   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.942335   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:20.942342   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:20.942403   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:20.985113   80857 cri.go:89] found id: ""
	I0717 18:42:20.985143   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.985151   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:20.985157   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:20.985217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:21.021807   80857 cri.go:89] found id: ""
	I0717 18:42:21.021834   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.021842   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:21.021847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:21.021906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:21.061924   80857 cri.go:89] found id: ""
	I0717 18:42:21.061949   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.061961   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:21.061969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:21.062025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:21.098890   80857 cri.go:89] found id: ""
	I0717 18:42:21.098916   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.098927   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:21.098935   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:21.098991   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:21.132576   80857 cri.go:89] found id: ""
	I0717 18:42:21.132612   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.132621   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:21.132627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:21.132687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:21.167723   80857 cri.go:89] found id: ""
	I0717 18:42:21.167765   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.167778   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:21.167788   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:21.167803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:21.220427   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:21.220461   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:21.233191   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:21.233216   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:21.304462   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:21.304481   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:21.304498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:21.386887   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:21.386925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:23.926518   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:23.940470   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:23.940534   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:23.976739   80857 cri.go:89] found id: ""
	I0717 18:42:23.976763   80857 logs.go:276] 0 containers: []
	W0717 18:42:23.976773   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:23.976778   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:23.976838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:24.007575   80857 cri.go:89] found id: ""
	I0717 18:42:24.007603   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.007612   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:24.007617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:24.007671   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:24.040430   80857 cri.go:89] found id: ""
	I0717 18:42:24.040455   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.040463   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:24.040468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:24.040581   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:24.071602   80857 cri.go:89] found id: ""
	I0717 18:42:24.071629   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.071638   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:24.071644   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:24.071705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:24.109570   80857 cri.go:89] found id: ""
	I0717 18:42:24.109595   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.109602   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:24.109607   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:24.109667   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:24.144284   80857 cri.go:89] found id: ""
	I0717 18:42:24.144305   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.144328   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:24.144333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:24.144382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:24.179441   80857 cri.go:89] found id: ""
	I0717 18:42:24.179467   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.179474   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:24.179479   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:24.179545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:24.222100   80857 cri.go:89] found id: ""
	I0717 18:42:24.222133   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.222143   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:24.222159   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:24.222175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:24.273181   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:24.273215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:24.285835   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:24.285861   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:24.357804   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:24.357826   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:24.357839   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:24.437270   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:24.437310   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:24.670033   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.671014   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:27.325296   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.597795   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.098055   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.979543   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:26.992443   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:26.992497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:27.025520   80857 cri.go:89] found id: ""
	I0717 18:42:27.025548   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.025560   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:27.025567   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:27.025630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:27.059971   80857 cri.go:89] found id: ""
	I0717 18:42:27.060002   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.060011   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:27.060016   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:27.060068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:27.091370   80857 cri.go:89] found id: ""
	I0717 18:42:27.091397   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.091407   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:27.091415   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:27.091468   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:27.123736   80857 cri.go:89] found id: ""
	I0717 18:42:27.123768   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.123779   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:27.123786   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:27.123849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:27.156155   80857 cri.go:89] found id: ""
	I0717 18:42:27.156177   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.156185   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:27.156190   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:27.156239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:27.190701   80857 cri.go:89] found id: ""
	I0717 18:42:27.190729   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.190741   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:27.190749   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:27.190825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:27.222093   80857 cri.go:89] found id: ""
	I0717 18:42:27.222119   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.222130   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:27.222137   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:27.222199   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:27.258789   80857 cri.go:89] found id: ""
	I0717 18:42:27.258813   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.258824   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:27.258834   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:27.258848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:27.307033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:27.307068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:27.321181   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:27.321209   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:27.390560   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:27.390593   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:27.390613   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:27.464352   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:27.464389   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:30.005732   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:30.019088   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:30.019160   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:29.170578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.670221   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.327610   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.824292   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.824392   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.595937   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.597622   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:30.052733   80857 cri.go:89] found id: ""
	I0717 18:42:30.052757   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.052765   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:30.052775   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:30.052836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:30.087683   80857 cri.go:89] found id: ""
	I0717 18:42:30.087711   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.087722   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:30.087729   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:30.087774   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:30.124371   80857 cri.go:89] found id: ""
	I0717 18:42:30.124404   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.124416   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:30.124432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:30.124487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:30.160081   80857 cri.go:89] found id: ""
	I0717 18:42:30.160107   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.160115   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:30.160122   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:30.160173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:30.194420   80857 cri.go:89] found id: ""
	I0717 18:42:30.194447   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.194456   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:30.194464   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:30.194522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:30.229544   80857 cri.go:89] found id: ""
	I0717 18:42:30.229570   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.229584   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:30.229591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:30.229650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:30.264164   80857 cri.go:89] found id: ""
	I0717 18:42:30.264193   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.264204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:30.264211   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:30.264266   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:30.296958   80857 cri.go:89] found id: ""
	I0717 18:42:30.296986   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.296996   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:30.297008   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:30.297049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:30.348116   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:30.348145   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:30.361373   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:30.361401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:30.429601   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:30.429620   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:30.429634   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:30.507718   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:30.507752   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:33.045539   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:33.058149   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:33.058219   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:33.088675   80857 cri.go:89] found id: ""
	I0717 18:42:33.088702   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.088710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:33.088717   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:33.088773   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:33.121269   80857 cri.go:89] found id: ""
	I0717 18:42:33.121297   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.121308   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:33.121315   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:33.121375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:33.156144   80857 cri.go:89] found id: ""
	I0717 18:42:33.156173   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.156184   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:33.156192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:33.156257   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:33.188559   80857 cri.go:89] found id: ""
	I0717 18:42:33.188585   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.188597   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:33.188603   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:33.188651   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:33.219650   80857 cri.go:89] found id: ""
	I0717 18:42:33.219672   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.219680   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:33.219686   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:33.219746   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:33.249704   80857 cri.go:89] found id: ""
	I0717 18:42:33.249728   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.249737   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:33.249742   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:33.249793   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:33.283480   80857 cri.go:89] found id: ""
	I0717 18:42:33.283503   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.283511   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:33.283516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:33.283560   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:33.314577   80857 cri.go:89] found id: ""
	I0717 18:42:33.314620   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.314629   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:33.314638   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:33.314649   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:33.363458   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:33.363491   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:33.377240   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:33.377267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:33.442939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:33.442961   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:33.442976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:33.522422   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:33.522456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:34.170638   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.171034   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.324780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.824832   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.097788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.596054   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.063823   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:36.078272   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:36.078342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:36.111460   80857 cri.go:89] found id: ""
	I0717 18:42:36.111494   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.111502   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:36.111509   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:36.111562   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:36.144191   80857 cri.go:89] found id: ""
	I0717 18:42:36.144222   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.144232   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:36.144239   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:36.144306   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:36.177247   80857 cri.go:89] found id: ""
	I0717 18:42:36.177277   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.177288   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:36.177294   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:36.177350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:36.213390   80857 cri.go:89] found id: ""
	I0717 18:42:36.213419   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.213427   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:36.213433   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:36.213493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:36.246775   80857 cri.go:89] found id: ""
	I0717 18:42:36.246799   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.246807   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:36.246812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:36.246870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:36.282441   80857 cri.go:89] found id: ""
	I0717 18:42:36.282463   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.282470   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:36.282476   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:36.282529   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:36.314178   80857 cri.go:89] found id: ""
	I0717 18:42:36.314203   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.314211   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:36.314216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:36.314265   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:36.353705   80857 cri.go:89] found id: ""
	I0717 18:42:36.353730   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.353737   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:36.353746   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:36.353758   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:36.370866   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:36.370894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:36.463660   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:36.463693   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:36.463710   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:36.540337   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:36.540371   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:36.575770   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:36.575801   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.128675   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:39.141187   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:39.141255   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:39.175960   80857 cri.go:89] found id: ""
	I0717 18:42:39.175982   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.175989   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:39.175994   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:39.176051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:39.209442   80857 cri.go:89] found id: ""
	I0717 18:42:39.209472   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.209483   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:39.209490   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:39.209552   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:39.243225   80857 cri.go:89] found id: ""
	I0717 18:42:39.243249   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.243256   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:39.243262   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:39.243309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:39.277369   80857 cri.go:89] found id: ""
	I0717 18:42:39.277396   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.277407   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:39.277414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:39.277464   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:39.310522   80857 cri.go:89] found id: ""
	I0717 18:42:39.310552   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.310563   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:39.310570   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:39.310637   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:39.344186   80857 cri.go:89] found id: ""
	I0717 18:42:39.344208   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.344216   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:39.344221   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:39.344279   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:39.375329   80857 cri.go:89] found id: ""
	I0717 18:42:39.375354   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.375366   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:39.375372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:39.375419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:39.412629   80857 cri.go:89] found id: ""
	I0717 18:42:39.412659   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.412668   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:39.412679   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:39.412696   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:39.447607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:39.447644   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.498981   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:39.499013   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:39.512380   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:39.512409   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:39.580396   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:39.580415   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:39.580428   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:38.670213   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:41.170284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.825257   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:43.324155   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.596267   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.597199   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.158145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:42.177450   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:42.177522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:42.222849   80857 cri.go:89] found id: ""
	I0717 18:42:42.222880   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.222890   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:42.222897   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:42.222954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:42.252712   80857 cri.go:89] found id: ""
	I0717 18:42:42.252742   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.252752   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:42.252757   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:42.252802   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:42.283764   80857 cri.go:89] found id: ""
	I0717 18:42:42.283789   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.283799   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:42.283806   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:42.283864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:42.317243   80857 cri.go:89] found id: ""
	I0717 18:42:42.317270   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.317281   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:42.317288   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:42.317350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:42.349972   80857 cri.go:89] found id: ""
	I0717 18:42:42.350000   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.350010   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:42.350017   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:42.350074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:42.382111   80857 cri.go:89] found id: ""
	I0717 18:42:42.382146   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.382158   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:42.382165   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:42.382223   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:42.414669   80857 cri.go:89] found id: ""
	I0717 18:42:42.414692   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.414700   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:42.414705   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:42.414765   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:42.446533   80857 cri.go:89] found id: ""
	I0717 18:42:42.446571   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.446579   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:42.446588   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:42.446603   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:42.522142   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:42.522165   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:42.522177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:42.602456   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:42.602493   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:42.642192   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:42.642221   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:42.695016   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:42.695046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:43.170955   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.670631   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.325626   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.097244   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.097783   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.208310   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:45.221821   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:45.221901   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:45.256887   80857 cri.go:89] found id: ""
	I0717 18:42:45.256914   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.256924   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:45.256930   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:45.256999   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:45.293713   80857 cri.go:89] found id: ""
	I0717 18:42:45.293735   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.293748   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:45.293753   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:45.293799   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:45.328790   80857 cri.go:89] found id: ""
	I0717 18:42:45.328815   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.328824   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:45.328833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:45.328880   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:45.364977   80857 cri.go:89] found id: ""
	I0717 18:42:45.365004   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.365014   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:45.365022   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:45.365084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:45.401131   80857 cri.go:89] found id: ""
	I0717 18:42:45.401157   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.401164   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:45.401170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:45.401217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:45.432252   80857 cri.go:89] found id: ""
	I0717 18:42:45.432279   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.432287   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:45.432293   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:45.432338   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:45.464636   80857 cri.go:89] found id: ""
	I0717 18:42:45.464659   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.464667   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:45.464674   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:45.464728   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:45.494884   80857 cri.go:89] found id: ""
	I0717 18:42:45.494913   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.494924   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:45.494935   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:45.494949   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:45.546578   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:45.546610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:45.559622   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:45.559647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:45.622094   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:45.622114   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:45.622126   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:45.699772   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:45.699814   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.241667   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:48.254205   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:48.254270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:48.293258   80857 cri.go:89] found id: ""
	I0717 18:42:48.293287   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.293298   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:48.293305   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:48.293362   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:48.328778   80857 cri.go:89] found id: ""
	I0717 18:42:48.328807   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.328818   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:48.328824   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:48.328884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:48.360230   80857 cri.go:89] found id: ""
	I0717 18:42:48.360256   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.360266   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:48.360276   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:48.360335   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:48.397770   80857 cri.go:89] found id: ""
	I0717 18:42:48.397797   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.397808   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:48.397815   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:48.397873   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:48.430912   80857 cri.go:89] found id: ""
	I0717 18:42:48.430938   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.430946   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:48.430956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:48.431015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:48.462659   80857 cri.go:89] found id: ""
	I0717 18:42:48.462688   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.462699   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:48.462706   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:48.462771   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:48.497554   80857 cri.go:89] found id: ""
	I0717 18:42:48.497584   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.497594   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:48.497601   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:48.497665   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:48.529524   80857 cri.go:89] found id: ""
	I0717 18:42:48.529547   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.529555   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:48.529564   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:48.529577   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:48.601265   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:48.601285   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:48.601297   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:48.678045   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:48.678075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.718565   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:48.718598   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:48.769923   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:48.769956   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:48.169777   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.669643   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.670334   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.324997   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.824163   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:49.596927   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.097602   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:51.282887   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:51.295778   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:51.295848   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:51.329324   80857 cri.go:89] found id: ""
	I0717 18:42:51.329351   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.329361   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:51.329369   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:51.329434   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:51.362013   80857 cri.go:89] found id: ""
	I0717 18:42:51.362042   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.362052   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:51.362059   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:51.362120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:51.395039   80857 cri.go:89] found id: ""
	I0717 18:42:51.395069   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.395080   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:51.395087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:51.395155   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:51.427683   80857 cri.go:89] found id: ""
	I0717 18:42:51.427709   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.427717   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:51.427722   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:51.427772   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:51.461683   80857 cri.go:89] found id: ""
	I0717 18:42:51.461706   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.461718   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:51.461723   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:51.461769   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:51.495780   80857 cri.go:89] found id: ""
	I0717 18:42:51.495802   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.495810   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:51.495816   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:51.495867   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:51.527541   80857 cri.go:89] found id: ""
	I0717 18:42:51.527573   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.527583   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:51.527591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:51.527648   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:51.567947   80857 cri.go:89] found id: ""
	I0717 18:42:51.567975   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.567987   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:51.567997   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:51.568014   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:51.620083   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:51.620109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:51.632823   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:51.632848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:51.705731   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:51.705753   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:51.705767   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:51.781969   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:51.782005   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.318011   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:54.331886   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:54.331942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:54.362935   80857 cri.go:89] found id: ""
	I0717 18:42:54.362962   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.362972   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:54.362979   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:54.363032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:54.396153   80857 cri.go:89] found id: ""
	I0717 18:42:54.396180   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.396191   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:54.396198   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:54.396259   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:54.433123   80857 cri.go:89] found id: ""
	I0717 18:42:54.433150   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.433160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:54.433168   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:54.433224   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:54.465034   80857 cri.go:89] found id: ""
	I0717 18:42:54.465064   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.465079   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:54.465087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:54.465200   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:54.496200   80857 cri.go:89] found id: ""
	I0717 18:42:54.496250   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.496263   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:54.496271   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:54.496332   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:54.528618   80857 cri.go:89] found id: ""
	I0717 18:42:54.528646   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.528656   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:54.528664   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:54.528724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:54.563018   80857 cri.go:89] found id: ""
	I0717 18:42:54.563042   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.563052   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:54.563059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:54.563114   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:54.595221   80857 cri.go:89] found id: ""
	I0717 18:42:54.595256   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.595266   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:54.595275   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:54.595291   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:54.608193   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:54.608220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:54.673755   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:54.673778   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:54.673793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:54.756443   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:54.756483   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.792670   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:54.792700   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:55.169224   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.169851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.824614   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.324611   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.596824   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:56.597638   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.096992   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.344637   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:57.357003   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:57.357068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:57.389230   80857 cri.go:89] found id: ""
	I0717 18:42:57.389261   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.389271   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:57.389278   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:57.389372   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:57.421529   80857 cri.go:89] found id: ""
	I0717 18:42:57.421553   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.421571   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:57.421578   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:57.421642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:57.455154   80857 cri.go:89] found id: ""
	I0717 18:42:57.455186   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.455193   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:57.455199   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:57.455245   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:57.490576   80857 cri.go:89] found id: ""
	I0717 18:42:57.490608   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.490621   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:57.490630   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:57.490693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:57.523972   80857 cri.go:89] found id: ""
	I0717 18:42:57.524010   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.524023   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:57.524033   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:57.524092   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:57.558106   80857 cri.go:89] found id: ""
	I0717 18:42:57.558132   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.558140   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:57.558145   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:57.558201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:57.591009   80857 cri.go:89] found id: ""
	I0717 18:42:57.591035   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.591045   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:57.591051   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:57.591110   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:57.624564   80857 cri.go:89] found id: ""
	I0717 18:42:57.624592   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.624601   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:57.624612   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:57.624627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:57.699833   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:57.699868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:57.737029   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:57.737066   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:57.790562   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:57.790605   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:57.804935   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:57.804984   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:57.873081   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:59.170203   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.170348   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.325020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.825020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.596885   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.597698   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:00.374166   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:00.388370   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:00.388443   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:00.421228   80857 cri.go:89] found id: ""
	I0717 18:43:00.421257   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.421268   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:00.421276   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:00.421325   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:00.451819   80857 cri.go:89] found id: ""
	I0717 18:43:00.451846   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.451856   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:00.451862   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:00.451917   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:00.482960   80857 cri.go:89] found id: ""
	I0717 18:43:00.482993   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.483004   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:00.483015   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:00.483074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:00.515860   80857 cri.go:89] found id: ""
	I0717 18:43:00.515882   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.515892   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:00.515899   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:00.515954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:00.548177   80857 cri.go:89] found id: ""
	I0717 18:43:00.548202   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.548212   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:00.548217   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:00.548275   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:00.580759   80857 cri.go:89] found id: ""
	I0717 18:43:00.580782   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.580790   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:00.580795   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:00.580847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:00.618661   80857 cri.go:89] found id: ""
	I0717 18:43:00.618683   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.618691   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:00.618699   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:00.618742   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:00.650503   80857 cri.go:89] found id: ""
	I0717 18:43:00.650528   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.650535   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:00.650544   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:00.650555   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:00.699668   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:00.699697   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:00.714086   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:00.714114   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:00.777051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:00.777087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:00.777105   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:00.859238   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:00.859274   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.399050   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:03.412565   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:03.412626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:03.445993   80857 cri.go:89] found id: ""
	I0717 18:43:03.446026   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.446038   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:03.446045   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:03.446101   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:03.481251   80857 cri.go:89] found id: ""
	I0717 18:43:03.481285   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.481297   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:03.481305   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:03.481371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:03.514406   80857 cri.go:89] found id: ""
	I0717 18:43:03.514433   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.514441   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:03.514447   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:03.514497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:03.546217   80857 cri.go:89] found id: ""
	I0717 18:43:03.546248   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.546258   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:03.546266   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:03.546327   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:03.577287   80857 cri.go:89] found id: ""
	I0717 18:43:03.577318   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.577333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:03.577340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:03.577394   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:03.610080   80857 cri.go:89] found id: ""
	I0717 18:43:03.610101   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.610109   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:03.610114   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:03.610159   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:03.643753   80857 cri.go:89] found id: ""
	I0717 18:43:03.643777   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.643787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:03.643792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:03.643849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:03.676290   80857 cri.go:89] found id: ""
	I0717 18:43:03.676338   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.676345   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:03.676353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:03.676364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:03.727818   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:03.727850   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:03.740752   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:03.740784   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:03.810465   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:03.810485   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:03.810499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:03.889326   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:03.889359   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.170473   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:05.170754   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:07.172145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.323855   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.325019   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.096213   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.096443   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.426949   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:06.440007   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:06.440079   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:06.471689   80857 cri.go:89] found id: ""
	I0717 18:43:06.471715   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.471724   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:06.471729   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:06.471775   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:06.503818   80857 cri.go:89] found id: ""
	I0717 18:43:06.503840   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.503847   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:06.503853   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:06.503900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:06.534733   80857 cri.go:89] found id: ""
	I0717 18:43:06.534755   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.534763   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:06.534768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:06.534818   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:06.565388   80857 cri.go:89] found id: ""
	I0717 18:43:06.565414   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.565421   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:06.565431   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:06.565480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:06.597739   80857 cri.go:89] found id: ""
	I0717 18:43:06.597764   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.597775   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:06.597782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:06.597847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:06.629823   80857 cri.go:89] found id: ""
	I0717 18:43:06.629845   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.629853   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:06.629859   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:06.629921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:06.663753   80857 cri.go:89] found id: ""
	I0717 18:43:06.663779   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.663787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:06.663792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:06.663838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:06.700868   80857 cri.go:89] found id: ""
	I0717 18:43:06.700896   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.700906   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:06.700917   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:06.700932   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:06.753064   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:06.753097   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:06.765845   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:06.765868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:06.834691   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:06.834715   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:06.834729   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:06.908650   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:06.908682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.450804   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:09.463369   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:09.463452   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:09.506992   80857 cri.go:89] found id: ""
	I0717 18:43:09.507020   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.507028   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:09.507035   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:09.507093   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:09.543083   80857 cri.go:89] found id: ""
	I0717 18:43:09.543108   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.543116   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:09.543121   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:09.543174   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:09.576194   80857 cri.go:89] found id: ""
	I0717 18:43:09.576219   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.576226   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:09.576231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:09.576289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:09.610148   80857 cri.go:89] found id: ""
	I0717 18:43:09.610171   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.610178   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:09.610184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:09.610258   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:09.642217   80857 cri.go:89] found id: ""
	I0717 18:43:09.642246   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.642255   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:09.642263   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:09.642342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:09.678041   80857 cri.go:89] found id: ""
	I0717 18:43:09.678064   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.678073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:09.678079   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:09.678141   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:09.711162   80857 cri.go:89] found id: ""
	I0717 18:43:09.711193   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.711204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:09.711212   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:09.711272   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:09.746135   80857 cri.go:89] found id: ""
	I0717 18:43:09.746164   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.746175   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:09.746186   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:09.746197   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:09.799268   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:09.799303   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:09.811910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:09.811935   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:09.876939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:09.876982   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:09.876998   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:09.951468   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:09.951502   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.671086   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.170273   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.823628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.824485   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.597216   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:13.096347   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.488926   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:12.501054   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:12.501112   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:12.532536   80857 cri.go:89] found id: ""
	I0717 18:43:12.532569   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.532577   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:12.532582   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:12.532629   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:12.565102   80857 cri.go:89] found id: ""
	I0717 18:43:12.565130   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.565141   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:12.565148   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:12.565208   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:12.600262   80857 cri.go:89] found id: ""
	I0717 18:43:12.600299   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.600309   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:12.600316   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:12.600366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:12.633950   80857 cri.go:89] found id: ""
	I0717 18:43:12.633980   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.633991   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:12.633998   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:12.634054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:12.673297   80857 cri.go:89] found id: ""
	I0717 18:43:12.673325   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.673338   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:12.673345   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:12.673406   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:12.707112   80857 cri.go:89] found id: ""
	I0717 18:43:12.707136   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.707144   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:12.707150   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:12.707206   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:12.746323   80857 cri.go:89] found id: ""
	I0717 18:43:12.746348   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.746358   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:12.746372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:12.746433   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:12.779470   80857 cri.go:89] found id: ""
	I0717 18:43:12.779496   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.779507   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:12.779518   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:12.779534   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:12.830156   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:12.830178   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:12.843707   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:12.843734   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:12.911849   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:12.911875   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:12.911891   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:12.986090   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:12.986122   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:14.170350   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:16.670284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:14.824727   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.324146   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.096736   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.596689   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.523428   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:15.536012   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:15.536070   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:15.569179   80857 cri.go:89] found id: ""
	I0717 18:43:15.569208   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.569218   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:15.569225   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:15.569273   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:15.606727   80857 cri.go:89] found id: ""
	I0717 18:43:15.606749   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.606757   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:15.606763   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:15.606805   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:15.638842   80857 cri.go:89] found id: ""
	I0717 18:43:15.638873   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.638883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:15.638889   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:15.638939   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:15.671418   80857 cri.go:89] found id: ""
	I0717 18:43:15.671444   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.671453   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:15.671459   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:15.671517   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:15.704892   80857 cri.go:89] found id: ""
	I0717 18:43:15.704928   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.704937   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:15.704956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:15.705013   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:15.738478   80857 cri.go:89] found id: ""
	I0717 18:43:15.738502   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.738509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:15.738515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:15.738584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:15.771188   80857 cri.go:89] found id: ""
	I0717 18:43:15.771225   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.771237   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:15.771245   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:15.771303   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:15.807737   80857 cri.go:89] found id: ""
	I0717 18:43:15.807763   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.807770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:15.807779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:15.807790   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:15.861202   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:15.861234   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:15.874170   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:15.874200   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:15.938049   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:15.938073   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:15.938086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:16.025420   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:16.025456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:18.563320   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:18.575574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:18.575634   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:18.608673   80857 cri.go:89] found id: ""
	I0717 18:43:18.608700   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.608710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:18.608718   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:18.608782   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:18.641589   80857 cri.go:89] found id: ""
	I0717 18:43:18.641611   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.641618   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:18.641624   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:18.641679   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:18.672232   80857 cri.go:89] found id: ""
	I0717 18:43:18.672258   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.672268   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:18.672274   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:18.672331   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:18.706088   80857 cri.go:89] found id: ""
	I0717 18:43:18.706111   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.706118   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:18.706134   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:18.706179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:18.742475   80857 cri.go:89] found id: ""
	I0717 18:43:18.742503   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.742512   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:18.742518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:18.742575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:18.774141   80857 cri.go:89] found id: ""
	I0717 18:43:18.774169   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.774178   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:18.774183   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:18.774234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:18.806648   80857 cri.go:89] found id: ""
	I0717 18:43:18.806672   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.806679   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:18.806685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:18.806731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:18.838022   80857 cri.go:89] found id: ""
	I0717 18:43:18.838047   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.838054   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:18.838062   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:18.838076   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:18.903467   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:18.903487   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:18.903498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:18.980385   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:18.980432   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:19.020884   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:19.020914   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:19.073530   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:19.073574   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:19.169841   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.172793   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:19.824764   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.826081   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:20.095275   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:22.097120   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.587870   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:21.602130   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:21.602185   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:21.635373   80857 cri.go:89] found id: ""
	I0717 18:43:21.635401   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.635411   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:21.635418   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:21.635480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:21.667175   80857 cri.go:89] found id: ""
	I0717 18:43:21.667200   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.667209   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:21.667216   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:21.667267   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:21.705876   80857 cri.go:89] found id: ""
	I0717 18:43:21.705907   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.705918   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:21.705926   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:21.705988   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:21.753302   80857 cri.go:89] found id: ""
	I0717 18:43:21.753323   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.753330   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:21.753337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:21.753388   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:21.785363   80857 cri.go:89] found id: ""
	I0717 18:43:21.785390   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.785396   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:21.785402   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:21.785448   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:21.817517   80857 cri.go:89] found id: ""
	I0717 18:43:21.817545   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.817553   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:21.817560   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:21.817615   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:21.849451   80857 cri.go:89] found id: ""
	I0717 18:43:21.849478   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.849489   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:21.849497   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:21.849553   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:21.880032   80857 cri.go:89] found id: ""
	I0717 18:43:21.880055   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.880063   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:21.880073   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:21.880086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:21.928498   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:21.928530   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:21.941532   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:21.941565   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:22.014044   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:22.014066   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:22.014081   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:22.090789   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:22.090817   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:24.628401   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:24.643571   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:24.643642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:24.679262   80857 cri.go:89] found id: ""
	I0717 18:43:24.679288   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.679297   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:24.679303   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:24.679360   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:24.713043   80857 cri.go:89] found id: ""
	I0717 18:43:24.713073   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.713085   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:24.713092   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:24.713145   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:24.751459   80857 cri.go:89] found id: ""
	I0717 18:43:24.751496   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.751508   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:24.751518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:24.751584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:24.790793   80857 cri.go:89] found id: ""
	I0717 18:43:24.790820   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.790831   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:24.790838   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:24.790895   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:24.822909   80857 cri.go:89] found id: ""
	I0717 18:43:24.822936   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.822945   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:24.822953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:24.823016   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:24.855369   80857 cri.go:89] found id: ""
	I0717 18:43:24.855418   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.855455   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:24.855468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:24.855557   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:24.891080   80857 cri.go:89] found id: ""
	I0717 18:43:24.891110   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.891127   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:24.891133   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:24.891187   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:24.923679   80857 cri.go:89] found id: ""
	I0717 18:43:24.923812   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.923833   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:24.923847   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:24.923863   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:24.975469   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:24.975499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:24.988671   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:24.988702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:43:23.670616   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.171013   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.323858   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.324395   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:28.325125   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.596495   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.597134   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:29.096334   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	W0717 18:43:25.055191   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:25.055210   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:25.055223   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:25.138867   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:25.138900   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:27.678822   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:27.691422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:27.691483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:27.723979   80857 cri.go:89] found id: ""
	I0717 18:43:27.724008   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.724016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:27.724022   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:27.724067   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:27.756389   80857 cri.go:89] found id: ""
	I0717 18:43:27.756415   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.756423   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:27.756429   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:27.756476   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:27.787617   80857 cri.go:89] found id: ""
	I0717 18:43:27.787644   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.787652   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:27.787658   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:27.787705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:27.821688   80857 cri.go:89] found id: ""
	I0717 18:43:27.821716   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.821725   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:27.821732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:27.821787   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:27.855353   80857 cri.go:89] found id: ""
	I0717 18:43:27.855378   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.855386   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:27.855392   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:27.855439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:27.887885   80857 cri.go:89] found id: ""
	I0717 18:43:27.887909   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.887917   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:27.887923   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:27.887984   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:27.918797   80857 cri.go:89] found id: ""
	I0717 18:43:27.918820   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.918828   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:27.918833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:27.918884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:27.951255   80857 cri.go:89] found id: ""
	I0717 18:43:27.951283   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.951295   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:27.951306   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:27.951319   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:28.025476   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:28.025506   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:28.063994   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:28.064020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:28.117762   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:28.117805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:28.135688   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:28.135725   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:28.238770   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:28.172438   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.670703   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:32.674896   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.824443   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.324216   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:31.595533   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.597968   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.739930   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:30.754147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:30.754231   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:30.794454   80857 cri.go:89] found id: ""
	I0717 18:43:30.794479   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.794486   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:30.794491   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:30.794548   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:30.831643   80857 cri.go:89] found id: ""
	I0717 18:43:30.831666   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.831673   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:30.831678   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:30.831731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:30.863293   80857 cri.go:89] found id: ""
	I0717 18:43:30.863315   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.863323   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:30.863337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:30.863395   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:30.897830   80857 cri.go:89] found id: ""
	I0717 18:43:30.897859   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.897870   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:30.897877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:30.897929   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:30.933179   80857 cri.go:89] found id: ""
	I0717 18:43:30.933209   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.933220   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:30.933227   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:30.933289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:30.964730   80857 cri.go:89] found id: ""
	I0717 18:43:30.964759   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.964773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:30.964781   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:30.964825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:30.996330   80857 cri.go:89] found id: ""
	I0717 18:43:30.996353   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.996361   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:30.996367   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:30.996419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:31.028193   80857 cri.go:89] found id: ""
	I0717 18:43:31.028220   80857 logs.go:276] 0 containers: []
	W0717 18:43:31.028228   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:31.028237   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:31.028251   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:31.040465   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:31.040490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:31.108127   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:31.108150   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:31.108164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:31.187763   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:31.187797   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:31.224238   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:31.224266   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:33.776145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:33.790045   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:33.790108   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:33.823471   80857 cri.go:89] found id: ""
	I0717 18:43:33.823495   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.823505   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:33.823512   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:33.823568   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:33.860205   80857 cri.go:89] found id: ""
	I0717 18:43:33.860233   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.860243   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:33.860250   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:33.860298   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:33.895469   80857 cri.go:89] found id: ""
	I0717 18:43:33.895499   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.895509   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:33.895516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:33.895578   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:33.938483   80857 cri.go:89] found id: ""
	I0717 18:43:33.938517   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.938527   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:33.938534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:33.938596   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:33.973265   80857 cri.go:89] found id: ""
	I0717 18:43:33.973293   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.973303   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:33.973309   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:33.973382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:34.012669   80857 cri.go:89] found id: ""
	I0717 18:43:34.012696   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.012704   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:34.012710   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:34.012760   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:34.045522   80857 cri.go:89] found id: ""
	I0717 18:43:34.045547   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.045557   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:34.045564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:34.045636   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:34.082927   80857 cri.go:89] found id: ""
	I0717 18:43:34.082957   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.082968   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:34.082979   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:34.082993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:34.134133   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:34.134168   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:34.146814   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:34.146837   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:34.217050   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:34.217079   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:34.217094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:34.298572   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:34.298610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:35.169868   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.170083   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:35.324578   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.825006   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:36.096437   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:38.096991   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:36.838187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:36.850888   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:36.850948   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:36.883132   80857 cri.go:89] found id: ""
	I0717 18:43:36.883153   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.883160   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:36.883166   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:36.883209   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:36.918310   80857 cri.go:89] found id: ""
	I0717 18:43:36.918339   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.918348   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:36.918353   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:36.918411   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:36.949794   80857 cri.go:89] found id: ""
	I0717 18:43:36.949818   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.949825   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:36.949831   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:36.949889   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:36.980913   80857 cri.go:89] found id: ""
	I0717 18:43:36.980951   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.980962   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:36.980969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:36.981029   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:37.014295   80857 cri.go:89] found id: ""
	I0717 18:43:37.014322   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.014330   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:37.014336   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:37.014397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:37.048555   80857 cri.go:89] found id: ""
	I0717 18:43:37.048581   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.048589   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:37.048595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:37.048643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:37.080533   80857 cri.go:89] found id: ""
	I0717 18:43:37.080561   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.080571   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:37.080577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:37.080640   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:37.112919   80857 cri.go:89] found id: ""
	I0717 18:43:37.112952   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.112963   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:37.112973   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:37.112987   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:37.165012   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:37.165044   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:37.177860   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:37.177881   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:37.244776   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:37.244806   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:37.244824   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:37.322949   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:37.322976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:39.861056   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:39.884509   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:39.884592   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:39.931317   80857 cri.go:89] found id: ""
	I0717 18:43:39.931341   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.931348   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:39.931354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:39.931410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:39.971571   80857 cri.go:89] found id: ""
	I0717 18:43:39.971615   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.971626   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:39.971634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:39.971692   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:40.003851   80857 cri.go:89] found id: ""
	I0717 18:43:40.003875   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.003883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:40.003891   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:40.003942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:40.040403   80857 cri.go:89] found id: ""
	I0717 18:43:40.040430   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.040440   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:40.040445   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:40.040498   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:39.669960   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.170056   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.325792   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.824332   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.596935   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.597153   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.071893   80857 cri.go:89] found id: ""
	I0717 18:43:40.071919   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.071927   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:40.071932   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:40.071979   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:40.111020   80857 cri.go:89] found id: ""
	I0717 18:43:40.111042   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.111052   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:40.111059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:40.111117   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:40.142872   80857 cri.go:89] found id: ""
	I0717 18:43:40.142899   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.142910   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:40.142917   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:40.142975   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:40.179919   80857 cri.go:89] found id: ""
	I0717 18:43:40.179944   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.179953   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:40.179963   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:40.179980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:40.233033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:40.233075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:40.246272   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:40.246299   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:40.311988   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:40.312014   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:40.312033   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:40.395622   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:40.395658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:42.935843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:42.949893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:42.949957   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:42.982429   80857 cri.go:89] found id: ""
	I0717 18:43:42.982451   80857 logs.go:276] 0 containers: []
	W0717 18:43:42.982459   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:42.982464   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:42.982512   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:43.018637   80857 cri.go:89] found id: ""
	I0717 18:43:43.018659   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.018666   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:43.018672   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:43.018719   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:43.054274   80857 cri.go:89] found id: ""
	I0717 18:43:43.054301   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.054310   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:43.054317   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:43.054368   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:43.093382   80857 cri.go:89] found id: ""
	I0717 18:43:43.093408   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.093418   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:43.093425   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:43.093484   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:43.125830   80857 cri.go:89] found id: ""
	I0717 18:43:43.125862   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.125871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:43.125878   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:43.125936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:43.157110   80857 cri.go:89] found id: ""
	I0717 18:43:43.157138   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.157147   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:43.157154   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:43.157215   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:43.188320   80857 cri.go:89] found id: ""
	I0717 18:43:43.188342   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.188349   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:43.188354   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:43.188400   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:43.220650   80857 cri.go:89] found id: ""
	I0717 18:43:43.220679   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.220686   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:43.220695   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:43.220707   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:43.259320   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:43.259358   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:43.308308   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:43.308346   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:43.321865   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:43.321894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:43.396110   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:43.396135   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:43.396147   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:44.670206   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.169748   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.323427   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.324066   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.096564   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.105605   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.976091   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:45.988956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:45.989015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:46.022277   80857 cri.go:89] found id: ""
	I0717 18:43:46.022307   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.022318   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:46.022325   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:46.022398   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:46.057607   80857 cri.go:89] found id: ""
	I0717 18:43:46.057636   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.057646   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:46.057653   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:46.057712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:46.089275   80857 cri.go:89] found id: ""
	I0717 18:43:46.089304   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.089313   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:46.089321   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:46.089378   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:46.123686   80857 cri.go:89] found id: ""
	I0717 18:43:46.123717   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.123726   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:46.123731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:46.123784   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:46.166600   80857 cri.go:89] found id: ""
	I0717 18:43:46.166628   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.166638   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:46.166645   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:46.166704   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:46.202518   80857 cri.go:89] found id: ""
	I0717 18:43:46.202543   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.202562   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:46.202568   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:46.202612   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:46.234573   80857 cri.go:89] found id: ""
	I0717 18:43:46.234608   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.234620   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:46.234627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:46.234687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:46.265305   80857 cri.go:89] found id: ""
	I0717 18:43:46.265333   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.265343   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:46.265355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:46.265369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:46.342963   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:46.342993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:46.377170   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:46.377208   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:46.429641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:46.429673   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:46.442168   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:46.442195   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:46.516656   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.016877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:49.030308   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:49.030375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:49.062400   80857 cri.go:89] found id: ""
	I0717 18:43:49.062423   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.062430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:49.062435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:49.062486   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:49.097110   80857 cri.go:89] found id: ""
	I0717 18:43:49.097131   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.097137   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:49.097142   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:49.097190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:49.128535   80857 cri.go:89] found id: ""
	I0717 18:43:49.128558   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.128571   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:49.128577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:49.128626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:49.162505   80857 cri.go:89] found id: ""
	I0717 18:43:49.162530   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.162538   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:49.162544   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:49.162594   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:49.194912   80857 cri.go:89] found id: ""
	I0717 18:43:49.194939   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.194950   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:49.194957   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:49.195025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:49.227055   80857 cri.go:89] found id: ""
	I0717 18:43:49.227083   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.227092   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:49.227098   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:49.227147   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:49.259568   80857 cri.go:89] found id: ""
	I0717 18:43:49.259596   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.259607   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:49.259618   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:49.259673   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:49.291700   80857 cri.go:89] found id: ""
	I0717 18:43:49.291727   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.291735   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:49.291744   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:49.291755   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:49.344600   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:49.344636   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:49.357680   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:49.357705   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:49.427160   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.427180   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:49.427192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:49.504151   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:49.504182   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:49.170632   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.170953   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.324205   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.823181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:53.824989   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.596298   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.596383   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:54.097260   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:52.041591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:52.054775   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:52.054841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:52.085858   80857 cri.go:89] found id: ""
	I0717 18:43:52.085892   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.085904   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:52.085911   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:52.085961   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:52.124100   80857 cri.go:89] found id: ""
	I0717 18:43:52.124122   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.124130   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:52.124135   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:52.124195   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:52.155056   80857 cri.go:89] found id: ""
	I0717 18:43:52.155079   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.155087   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:52.155093   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:52.155154   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:52.189318   80857 cri.go:89] found id: ""
	I0717 18:43:52.189349   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.189359   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:52.189366   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:52.189430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:52.222960   80857 cri.go:89] found id: ""
	I0717 18:43:52.222988   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.222999   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:52.223006   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:52.223071   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:52.255807   80857 cri.go:89] found id: ""
	I0717 18:43:52.255834   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.255841   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:52.255847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:52.255904   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:52.286596   80857 cri.go:89] found id: ""
	I0717 18:43:52.286628   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.286641   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:52.286648   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:52.286703   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:52.319607   80857 cri.go:89] found id: ""
	I0717 18:43:52.319632   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.319641   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:52.319652   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:52.319666   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:52.371270   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:52.371301   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:52.384771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:52.384803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:52.456408   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:52.456432   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:52.456444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:52.533724   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:52.533759   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:53.171080   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.669642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.324311   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.823693   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.595916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.597526   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.072554   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:55.087005   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:55.087086   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:55.123300   80857 cri.go:89] found id: ""
	I0717 18:43:55.123325   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.123331   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:55.123336   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:55.123390   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:55.158476   80857 cri.go:89] found id: ""
	I0717 18:43:55.158502   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.158509   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:55.158515   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:55.158572   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:55.198489   80857 cri.go:89] found id: ""
	I0717 18:43:55.198511   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.198518   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:55.198524   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:55.198567   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:55.230901   80857 cri.go:89] found id: ""
	I0717 18:43:55.230933   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.230943   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:55.230951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:55.231028   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:55.262303   80857 cri.go:89] found id: ""
	I0717 18:43:55.262326   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.262333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:55.262340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:55.262393   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:55.293889   80857 cri.go:89] found id: ""
	I0717 18:43:55.293916   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.293925   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:55.293930   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:55.293983   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:55.325695   80857 cri.go:89] found id: ""
	I0717 18:43:55.325720   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.325727   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:55.325737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:55.325797   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:55.360021   80857 cri.go:89] found id: ""
	I0717 18:43:55.360044   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.360052   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:55.360059   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:55.360075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:55.372088   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:55.372111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:55.442073   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:55.442101   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:55.442116   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:55.521733   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:55.521763   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:55.558914   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:55.558947   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.114001   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:58.126283   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:58.126353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:58.162769   80857 cri.go:89] found id: ""
	I0717 18:43:58.162800   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.162810   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:58.162815   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:58.162862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:58.197359   80857 cri.go:89] found id: ""
	I0717 18:43:58.197386   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.197397   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:58.197404   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:58.197465   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:58.229662   80857 cri.go:89] found id: ""
	I0717 18:43:58.229691   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.229700   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:58.229707   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:58.229766   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:58.261810   80857 cri.go:89] found id: ""
	I0717 18:43:58.261832   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.261838   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:58.261844   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:58.261900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:58.293243   80857 cri.go:89] found id: ""
	I0717 18:43:58.293271   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.293282   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:58.293290   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:58.293353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:58.325689   80857 cri.go:89] found id: ""
	I0717 18:43:58.325714   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.325724   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:58.325731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:58.325785   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:58.357381   80857 cri.go:89] found id: ""
	I0717 18:43:58.357406   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.357416   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:58.357422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:58.357483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:58.389859   80857 cri.go:89] found id: ""
	I0717 18:43:58.389888   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.389900   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:58.389910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:58.389926   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:58.458034   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:58.458058   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:58.458072   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:58.536134   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:58.536164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:58.573808   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:58.573834   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.624956   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:58.624985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:58.170810   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.670184   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.671370   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.824682   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.824874   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:01.096294   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:03.096348   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:01.138486   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:01.151547   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:01.151610   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:01.186397   80857 cri.go:89] found id: ""
	I0717 18:44:01.186422   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.186430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:01.186435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:01.186487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:01.220797   80857 cri.go:89] found id: ""
	I0717 18:44:01.220822   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.220830   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:01.220849   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:01.220894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:01.257640   80857 cri.go:89] found id: ""
	I0717 18:44:01.257666   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.257674   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:01.257680   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:01.257727   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:01.295393   80857 cri.go:89] found id: ""
	I0717 18:44:01.295418   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.295425   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:01.295432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:01.295493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:01.327242   80857 cri.go:89] found id: ""
	I0717 18:44:01.327261   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.327268   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:01.327273   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:01.327319   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:01.358559   80857 cri.go:89] found id: ""
	I0717 18:44:01.358586   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.358593   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:01.358599   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:01.358647   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:01.392301   80857 cri.go:89] found id: ""
	I0717 18:44:01.392332   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.392341   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:01.392346   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:01.392407   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:01.424422   80857 cri.go:89] found id: ""
	I0717 18:44:01.424449   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.424457   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:01.424465   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:01.424477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:01.473298   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:01.473332   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:01.487444   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:01.487471   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:01.552548   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:01.552572   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:01.552586   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:01.634203   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:01.634242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:04.175618   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:04.188071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:04.188150   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:04.222149   80857 cri.go:89] found id: ""
	I0717 18:44:04.222173   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.222180   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:04.222185   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:04.222242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:04.257174   80857 cri.go:89] found id: ""
	I0717 18:44:04.257211   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.257223   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:04.257232   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:04.257284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:04.291628   80857 cri.go:89] found id: ""
	I0717 18:44:04.291653   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.291666   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:04.291673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:04.291733   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:04.325935   80857 cri.go:89] found id: ""
	I0717 18:44:04.325964   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.325975   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:04.325982   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:04.326043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:04.356610   80857 cri.go:89] found id: ""
	I0717 18:44:04.356638   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.356648   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:04.356655   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:04.356712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:04.387728   80857 cri.go:89] found id: ""
	I0717 18:44:04.387764   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.387773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:04.387782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:04.387840   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:04.421452   80857 cri.go:89] found id: ""
	I0717 18:44:04.421479   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.421488   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:04.421495   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:04.421555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:04.453111   80857 cri.go:89] found id: ""
	I0717 18:44:04.453139   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.453150   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:04.453161   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:04.453175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:04.506185   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:04.506215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:04.523611   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:04.523638   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:04.591051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:04.591074   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:04.591091   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:04.666603   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:04.666647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:05.169836   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.170112   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.324886   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.325488   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.096545   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.598131   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.205208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:07.218182   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:07.218236   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:07.254521   80857 cri.go:89] found id: ""
	I0717 18:44:07.254554   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.254565   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:07.254571   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:07.254638   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:07.293622   80857 cri.go:89] found id: ""
	I0717 18:44:07.293650   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.293658   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:07.293663   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:07.293711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:07.331056   80857 cri.go:89] found id: ""
	I0717 18:44:07.331083   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.331091   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:07.331097   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:07.331157   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:07.368445   80857 cri.go:89] found id: ""
	I0717 18:44:07.368476   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.368484   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:07.368491   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:07.368541   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:07.405507   80857 cri.go:89] found id: ""
	I0717 18:44:07.405539   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.405550   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:07.405557   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:07.405617   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:07.444752   80857 cri.go:89] found id: ""
	I0717 18:44:07.444782   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.444792   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:07.444801   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:07.444859   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:07.486976   80857 cri.go:89] found id: ""
	I0717 18:44:07.487006   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.487016   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:07.487024   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:07.487073   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:07.522561   80857 cri.go:89] found id: ""
	I0717 18:44:07.522590   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.522599   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:07.522607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:07.522618   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:07.576350   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:07.576382   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:07.591491   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:07.591517   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:07.659860   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:07.659886   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:07.659902   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:07.743445   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:07.743478   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:09.170601   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.170851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:09.824120   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.826838   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.097009   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:12.596778   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.284468   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:10.296549   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:10.296608   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:10.331209   80857 cri.go:89] found id: ""
	I0717 18:44:10.331236   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.331246   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:10.331252   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:10.331297   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:10.363911   80857 cri.go:89] found id: ""
	I0717 18:44:10.363941   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.363949   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:10.363954   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:10.364001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:10.395935   80857 cri.go:89] found id: ""
	I0717 18:44:10.395960   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.395970   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:10.395977   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:10.396021   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:10.428307   80857 cri.go:89] found id: ""
	I0717 18:44:10.428337   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.428344   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:10.428351   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:10.428397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:10.459615   80857 cri.go:89] found id: ""
	I0717 18:44:10.459643   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.459654   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:10.459661   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:10.459715   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:10.491593   80857 cri.go:89] found id: ""
	I0717 18:44:10.491617   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.491628   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:10.491636   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:10.491693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:10.526822   80857 cri.go:89] found id: ""
	I0717 18:44:10.526846   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.526853   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:10.526858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:10.526918   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:10.561037   80857 cri.go:89] found id: ""
	I0717 18:44:10.561066   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.561077   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:10.561087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:10.561101   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:10.643333   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:10.643364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:10.684673   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:10.684704   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:10.736191   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:10.736220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:10.748762   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:10.748793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:10.812121   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.313033   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:13.325692   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:13.325756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:13.358306   80857 cri.go:89] found id: ""
	I0717 18:44:13.358336   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.358345   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:13.358352   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:13.358410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:13.393233   80857 cri.go:89] found id: ""
	I0717 18:44:13.393264   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.393274   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:13.393282   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:13.393340   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:13.424256   80857 cri.go:89] found id: ""
	I0717 18:44:13.424287   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.424298   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:13.424305   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:13.424358   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:13.454988   80857 cri.go:89] found id: ""
	I0717 18:44:13.455010   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.455018   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:13.455023   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:13.455069   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:13.491019   80857 cri.go:89] found id: ""
	I0717 18:44:13.491046   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.491054   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:13.491060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:13.491107   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:13.523045   80857 cri.go:89] found id: ""
	I0717 18:44:13.523070   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.523079   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:13.523085   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:13.523131   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:13.555442   80857 cri.go:89] found id: ""
	I0717 18:44:13.555470   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.555483   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:13.555489   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:13.555549   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:13.588891   80857 cri.go:89] found id: ""
	I0717 18:44:13.588921   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.588931   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:13.588958   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:13.588973   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:13.663635   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.663659   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:13.663674   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:13.749098   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:13.749135   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:13.785489   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:13.785524   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:13.837098   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:13.837128   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
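	The cycle above (repeated below at 18:44:16, 18:44:19 and so on) is minikube's log-gathering pass while the old control plane is down: every "crictl ps -a --quiet --name=<component>" query returns no containers, and "kubectl describe nodes" fails with a connection refused on localhost:8443, so only the kubelet, dmesg, CRI-O and container-status sources produce output. Below is a minimal shell sketch of the same checks, run inside the node (for example via "minikube ssh"); the individual commands are taken verbatim from the log, and only the for-loop condensation is an editorial assumption.

    #!/usr/bin/env bash
    # Sketch: per-component container check minikube performs while the
    # apiserver is unreachable (commands copied from the log above).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$c\""
      else
        echo "$c: $ids"
      fi
    done
    # Fallback log sources gathered when no components are running:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a || sudo docker ps -a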
	I0717 18:44:13.671215   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.671282   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.671466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:14.324573   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.826063   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.095967   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.096403   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.096478   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.350571   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:16.364398   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:16.364470   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:16.400677   80857 cri.go:89] found id: ""
	I0717 18:44:16.400708   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.400719   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:16.400726   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:16.400781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:16.431715   80857 cri.go:89] found id: ""
	I0717 18:44:16.431743   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.431754   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:16.431760   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:16.431836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:16.465115   80857 cri.go:89] found id: ""
	I0717 18:44:16.465148   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.465160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:16.465167   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:16.465230   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:16.497906   80857 cri.go:89] found id: ""
	I0717 18:44:16.497933   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.497944   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:16.497952   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:16.498008   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:16.534066   80857 cri.go:89] found id: ""
	I0717 18:44:16.534097   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.534108   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:16.534116   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:16.534173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:16.566679   80857 cri.go:89] found id: ""
	I0717 18:44:16.566706   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.566717   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:16.566724   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:16.566781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:16.598397   80857 cri.go:89] found id: ""
	I0717 18:44:16.598416   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.598422   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:16.598427   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:16.598480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:16.629943   80857 cri.go:89] found id: ""
	I0717 18:44:16.629975   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.629998   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:16.630017   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:16.630032   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:16.706452   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:16.706489   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:16.744971   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:16.745003   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:16.796450   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:16.796477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:16.809192   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:16.809217   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:16.875699   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.376821   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:19.389921   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:19.389980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:19.423837   80857 cri.go:89] found id: ""
	I0717 18:44:19.423862   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.423870   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:19.423877   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:19.423934   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:19.468267   80857 cri.go:89] found id: ""
	I0717 18:44:19.468293   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.468305   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:19.468311   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:19.468371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:19.503286   80857 cri.go:89] found id: ""
	I0717 18:44:19.503315   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.503326   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:19.503333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:19.503391   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:19.535505   80857 cri.go:89] found id: ""
	I0717 18:44:19.535531   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.535542   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:19.535548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:19.535607   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:19.568678   80857 cri.go:89] found id: ""
	I0717 18:44:19.568704   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.568711   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:19.568717   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:19.568762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:19.604027   80857 cri.go:89] found id: ""
	I0717 18:44:19.604053   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.604064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:19.604071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:19.604127   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:19.637357   80857 cri.go:89] found id: ""
	I0717 18:44:19.637387   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.637397   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:19.637403   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:19.637450   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:19.669094   80857 cri.go:89] found id: ""
	I0717 18:44:19.669126   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.669136   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:19.669145   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:19.669160   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:19.720218   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:19.720248   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:19.733320   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:19.733343   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:19.796229   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.796252   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:19.796267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:19.871157   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:19.871186   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:20.170824   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.670239   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.324037   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.324408   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.824030   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.098734   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.595859   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.409012   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:22.421477   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:22.421546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:22.457314   80857 cri.go:89] found id: ""
	I0717 18:44:22.457337   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.457346   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:22.457354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:22.457410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:22.490998   80857 cri.go:89] found id: ""
	I0717 18:44:22.491022   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.491030   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:22.491037   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:22.491090   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:22.523904   80857 cri.go:89] found id: ""
	I0717 18:44:22.523934   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.523945   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:22.523953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:22.524012   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:22.555917   80857 cri.go:89] found id: ""
	I0717 18:44:22.555947   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.555956   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:22.555962   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:22.556026   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:22.588510   80857 cri.go:89] found id: ""
	I0717 18:44:22.588552   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.588565   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:22.588574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:22.588652   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:22.621854   80857 cri.go:89] found id: ""
	I0717 18:44:22.621883   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.621893   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:22.621901   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:22.621956   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:22.653897   80857 cri.go:89] found id: ""
	I0717 18:44:22.653921   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.653931   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:22.653938   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:22.654001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:22.685731   80857 cri.go:89] found id: ""
	I0717 18:44:22.685760   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.685770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:22.685779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:22.685792   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:22.735514   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:22.735545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:22.748148   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:22.748169   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:22.809637   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:22.809666   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:22.809682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:22.886014   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:22.886050   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:24.670825   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:27.169930   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.824694   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.324620   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.597423   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.095788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.431906   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:25.444866   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:25.444965   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:25.477211   80857 cri.go:89] found id: ""
	I0717 18:44:25.477245   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.477257   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:25.477264   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:25.477366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:25.512077   80857 cri.go:89] found id: ""
	I0717 18:44:25.512108   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.512120   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:25.512127   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:25.512177   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:25.543953   80857 cri.go:89] found id: ""
	I0717 18:44:25.543974   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.543981   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:25.543987   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:25.544032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:25.574955   80857 cri.go:89] found id: ""
	I0717 18:44:25.574980   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.574990   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:25.574997   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:25.575054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:25.607078   80857 cri.go:89] found id: ""
	I0717 18:44:25.607106   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.607117   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:25.607125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:25.607188   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:25.643129   80857 cri.go:89] found id: ""
	I0717 18:44:25.643152   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.643162   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:25.643169   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:25.643225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:25.678220   80857 cri.go:89] found id: ""
	I0717 18:44:25.678241   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.678249   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:25.678254   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:25.678309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:25.715405   80857 cri.go:89] found id: ""
	I0717 18:44:25.715433   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.715446   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:25.715458   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:25.715474   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:25.772978   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:25.773008   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:25.786559   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:25.786587   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:25.853369   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:25.853386   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:25.853398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:25.954346   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:25.954398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:28.498591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:28.511701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:28.511762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:28.543527   80857 cri.go:89] found id: ""
	I0717 18:44:28.543551   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.543559   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:28.543565   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:28.543624   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:28.574737   80857 cri.go:89] found id: ""
	I0717 18:44:28.574762   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.574769   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:28.574776   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:28.574835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:28.608129   80857 cri.go:89] found id: ""
	I0717 18:44:28.608166   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.608174   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:28.608179   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:28.608234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:28.644324   80857 cri.go:89] found id: ""
	I0717 18:44:28.644348   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.644357   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:28.644371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:28.644426   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:28.675830   80857 cri.go:89] found id: ""
	I0717 18:44:28.675859   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.675870   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:28.675877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:28.675937   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:28.705713   80857 cri.go:89] found id: ""
	I0717 18:44:28.705749   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.705760   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:28.705768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:28.705821   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:28.738648   80857 cri.go:89] found id: ""
	I0717 18:44:28.738677   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.738688   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:28.738695   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:28.738752   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:28.768877   80857 cri.go:89] found id: ""
	I0717 18:44:28.768906   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.768916   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:28.768927   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:28.768953   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:28.818951   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:28.818985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:28.832813   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:28.832843   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:28.910030   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:28.910051   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:28.910063   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:28.986706   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:28.986743   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:29.170559   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.669543   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.824906   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:33.324261   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.096916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:32.597522   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.529154   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:31.543261   80857 kubeadm.go:597] duration metric: took 4m4.346231712s to restartPrimaryControlPlane
	W0717 18:44:31.543327   80857 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:31.543350   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
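	At this point process 80857 has spent 4m4s polling for a restartable control plane (kubeadm.go:597), gives up with the "will reset cluster" warning, and wipes the node with the version-pinned kubeadm. A standalone sketch of that fallback, assuming it is run by hand on the node rather than through minikube's ssh_runner:

    # Reset the failed v1.20.0 control plane before re-initialising it
    # (same command as in the log; the CRI socket is CRI-O's).
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force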
	I0717 18:44:33.670602   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.169669   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.325082   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.824371   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.096445   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.097375   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:39.098005   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.752008   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.208633612s)
	I0717 18:44:36.752076   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:44:36.765411   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:44:36.774556   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:44:36.783406   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:44:36.783427   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:44:36.783479   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:44:36.791953   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:44:36.792007   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:44:36.800929   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:44:36.808988   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:44:36.809049   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:44:36.817312   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.825586   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:44:36.825648   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.834783   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:44:36.843109   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:44:36.843166   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:44:36.852276   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:44:37.058251   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
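	After the reset completes, minikube checks that each kubeconfig under /etc/kubernetes still references https://control-plane.minikube.internal:8443 and deletes any file that does not; here all four files were already removed by the reset, so each grep exits with status 2 and the rm calls are no-ops, after which "kubeadm init" is launched with the long --ignore-preflight-errors list shown above (its first captured line is the kubelet-service warning). A condensed sketch of the cleanup; the loop, the -q flag and the stderr redirect are editorial, the log runs four separate grep/rm pairs:

    # Remove kubeconfigs that no longer reference the expected endpoint.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done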
	I0717 18:44:38.170695   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.671193   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.324181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.818959   80401 pod_ready.go:81] duration metric: took 4m0.000961975s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	E0717 18:44:40.818998   80401 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:44:40.819017   80401 pod_ready.go:38] duration metric: took 4m12.045669741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:44:40.819042   80401 kubeadm.go:597] duration metric: took 4m22.276381575s to restartPrimaryControlPlane
	W0717 18:44:40.819091   80401 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:40.819116   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:44:41.597013   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:44.097096   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:43.170145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:45.670626   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:46.595570   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.598459   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.169822   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:50.170686   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:52.670255   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:51.097591   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:53.597467   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:55.170853   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:57.670157   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:56.096506   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:58.107493   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.170210   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.672286   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.596747   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.590517   81068 pod_ready.go:81] duration metric: took 4m0.000120095s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:02.590549   81068 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:02.590572   81068 pod_ready.go:38] duration metric: took 4m10.536894511s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:02.590607   81068 kubeadm.go:597] duration metric: took 4m18.045314131s to restartPrimaryControlPlane
	W0717 18:45:02.590672   81068 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:02.590702   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:06.920900   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.10175503s)
	I0717 18:45:06.921009   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:06.952090   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:06.962820   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:06.979545   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:06.979577   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:06.979641   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:06.990493   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:06.990574   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:07.014934   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:07.024381   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:07.024449   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:07.033573   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.042495   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:07.042552   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.051233   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:07.059616   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:07.059674   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:07.068348   80401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:07.112042   80401 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 18:45:07.112188   80401 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:07.229262   80401 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:07.229356   80401 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:07.229491   80401 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 18:45:07.239251   80401 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:05.171753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.669753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.241949   80401 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:07.242054   80401 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:07.242150   80401 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:07.242253   80401 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:07.242355   80401 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:07.242459   80401 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:07.242536   80401 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:07.242620   80401 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:07.242721   80401 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:07.242835   80401 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:07.242937   80401 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:07.242998   80401 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:07.243068   80401 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:07.641462   80401 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:07.705768   80401 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:07.821102   80401 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:07.898702   80401 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:08.107470   80401 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:08.107945   80401 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:08.111615   80401 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:08.113464   80401 out.go:204]   - Booting up control plane ...
	I0717 18:45:08.113572   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:08.113695   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:08.113843   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:08.131411   80401 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:08.137563   80401 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:08.137622   80401 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:08.268403   80401 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:08.268519   80401 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:08.769158   80401 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.386396ms
	I0717 18:45:08.769265   80401 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:45:09.669968   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:11.670466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:13.771873   80401 kubeadm.go:310] [api-check] The API server is healthy after 5.002458706s
	I0717 18:45:13.789581   80401 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:13.804268   80401 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:13.831438   80401 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:13.831641   80401 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-066175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:13.845165   80401 kubeadm.go:310] [bootstrap-token] Using token: fscs12.0o2n9pl0vxdw75m1
	I0717 18:45:13.846851   80401 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:13.847002   80401 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:13.854788   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:13.866828   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:13.871541   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:13.875508   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:13.880068   80401 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:14.179824   80401 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:14.669946   80401 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:15.180053   80401 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:15.180076   80401 kubeadm.go:310] 
	I0717 18:45:15.180180   80401 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:15.180201   80401 kubeadm.go:310] 
	I0717 18:45:15.180287   80401 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:15.180295   80401 kubeadm.go:310] 
	I0717 18:45:15.180348   80401 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:15.180437   80401 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:15.180517   80401 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:15.180530   80401 kubeadm.go:310] 
	I0717 18:45:15.180607   80401 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:15.180617   80401 kubeadm.go:310] 
	I0717 18:45:15.180682   80401 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:15.180692   80401 kubeadm.go:310] 
	I0717 18:45:15.180775   80401 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:15.180871   80401 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:15.180984   80401 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:15.180996   80401 kubeadm.go:310] 
	I0717 18:45:15.181107   80401 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:15.181221   80401 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:15.181234   80401 kubeadm.go:310] 
	I0717 18:45:15.181370   80401 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181523   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:15.181571   80401 kubeadm.go:310] 	--control-plane 
	I0717 18:45:15.181579   80401 kubeadm.go:310] 
	I0717 18:45:15.181679   80401 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:15.181690   80401 kubeadm.go:310] 
	I0717 18:45:15.181802   80401 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181954   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:15.182460   80401 kubeadm.go:310] W0717 18:45:07.084606    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.182848   80401 kubeadm.go:310] W0717 18:45:07.085710    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.183017   80401 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
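	For the no-preload profile the re-initialisation succeeds quickly: the kubelet is healthy after roughly 0.5s, the API server after about 5s, and kubeadm prints the usual join instructions plus two deprecation warnings about the v1beta3 config. A quick way to confirm the result from the node, using the versioned kubectl binary and the admin kubeconfig path printed above (running these by hand is an assumption; the test performs equivalent checks through minikube itself):

    # Verify the freshly initialised control plane from the node.
    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl \
      --kubeconfig=/etc/kubernetes/admin.conf get nodes
    sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl \
      --kubeconfig=/etc/kubernetes/admin.conf -n kube-system get pods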
	I0717 18:45:15.183038   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:45:15.183048   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:15.185022   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:13.671267   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.671682   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.186444   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:15.197514   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:45:15.216000   80401 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:15.216097   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.216157   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-066175 minikube.k8s.io/updated_at=2024_07_17T18_45_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=no-preload-066175 minikube.k8s.io/primary=true
	I0717 18:45:15.251049   80401 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:15.383234   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.884265   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.384075   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.883375   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.383864   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.884072   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.383283   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.883644   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.384366   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.507413   80401 kubeadm.go:1113] duration metric: took 4.291369352s to wait for elevateKubeSystemPrivileges
	I0717 18:45:19.507450   80401 kubeadm.go:394] duration metric: took 5m1.019320853s to StartCluster
	I0717 18:45:19.507473   80401 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.507570   80401 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:19.510004   80401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.510329   80401 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:19.510401   80401 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:19.510484   80401 addons.go:69] Setting storage-provisioner=true in profile "no-preload-066175"
	I0717 18:45:19.510515   80401 addons.go:234] Setting addon storage-provisioner=true in "no-preload-066175"
	W0717 18:45:19.510523   80401 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:19.510530   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:45:19.510531   80401 addons.go:69] Setting default-storageclass=true in profile "no-preload-066175"
	I0717 18:45:19.510553   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510551   80401 addons.go:69] Setting metrics-server=true in profile "no-preload-066175"
	I0717 18:45:19.510572   80401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-066175"
	I0717 18:45:19.510586   80401 addons.go:234] Setting addon metrics-server=true in "no-preload-066175"
	W0717 18:45:19.510596   80401 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:19.510628   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511027   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511047   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511075   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511102   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.512057   80401 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:19.513662   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:19.532038   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40719
	I0717 18:45:19.532059   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0717 18:45:19.532048   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0717 18:45:19.532557   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532562   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532701   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.533086   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533107   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533246   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533261   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533276   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533295   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533455   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533671   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533732   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533851   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.533933   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.533958   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.534280   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.534310   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.537749   80401 addons.go:234] Setting addon default-storageclass=true in "no-preload-066175"
	W0717 18:45:19.537773   80401 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:19.537804   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.538168   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.538206   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.550488   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I0717 18:45:19.551013   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.551625   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.551647   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.552005   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.552335   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.553613   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0717 18:45:19.553633   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0717 18:45:19.554184   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554243   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554271   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.554784   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554801   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.554965   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554986   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.555220   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555350   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555393   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.555995   80401 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:19.556103   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.556229   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.556825   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.557482   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:19.557499   80401 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:19.557517   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.558437   80401 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:19.560069   80401 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.560084   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:19.560100   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.560881   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.560908   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.560932   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.561265   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.561477   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.561633   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.561732   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.563601   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564025   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.564197   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.564219   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564378   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.564549   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.564686   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.579324   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37271
	I0717 18:45:19.579786   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.580331   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.580354   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.580697   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.580925   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.582700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.582910   80401 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.582923   80401 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:19.582936   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.585938   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586387   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.586414   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586605   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.586758   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.586920   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.587061   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.706369   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:19.727936   80401 node_ready.go:35] waiting up to 6m0s for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738822   80401 node_ready.go:49] node "no-preload-066175" has status "Ready":"True"
	I0717 18:45:19.738841   80401 node_ready.go:38] duration metric: took 10.872501ms for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738852   80401 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:19.744979   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:19.854180   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.873723   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:19.873746   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:19.883867   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.902041   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:19.902064   80401 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:19.926788   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:19.926867   80401 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:19.953788   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
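The apply above installs the metrics-server addon manifests. The addon normally registers an APIService and a Deployment in kube-system; an illustrative check of those objects (resource names assumed from the addon's usual manifests, not echoed in this log, and not run here) would be:
	# Illustrative only: confirm the metrics-server addon objects created by the apply above.
	kubectl --context no-preload-066175 -n kube-system get deploy metrics-server
	kubectl --context no-preload-066175 get apiservice v1beta1.metrics.k8s.io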
	I0717 18:45:20.571091   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571119   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571119   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571137   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571394   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.571439   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.571456   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571463   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571459   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572575   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571494   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572789   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572761   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572804   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572815   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572824   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.573027   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.573044   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589595   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.589614   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.589913   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.589940   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589918   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.789754   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.789776   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790082   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790103   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790113   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.790123   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790416   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790457   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790470   80401 addons.go:475] Verifying addon metrics-server=true in "no-preload-066175"
	I0717 18:45:20.790416   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.792175   80401 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:45:18.169876   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:20.170261   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:22.664656   80180 pod_ready.go:81] duration metric: took 4m0.000669682s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:22.664696   80180 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:22.664716   80180 pod_ready.go:38] duration metric: took 4m9.027997903s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:22.664746   80180 kubeadm.go:597] duration metric: took 4m19.955287366s to restartPrimaryControlPlane
	W0717 18:45:22.664823   80180 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:22.664854   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:20.793543   80401 addons.go:510] duration metric: took 1.283145408s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 18:45:21.766367   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.252243   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.771415   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:24.771443   80401 pod_ready.go:81] duration metric: took 5.026437249s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:24.771457   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:26.777371   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:28.778629   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.277550   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.792126   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.792154   80401 pod_ready.go:81] duration metric: took 7.020687724s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.792168   80401 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798687   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.798708   80401 pod_ready.go:81] duration metric: took 6.534344ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798717   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803428   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.803452   80401 pod_ready.go:81] duration metric: took 4.727536ms for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803464   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815053   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.815078   80401 pod_ready.go:81] duration metric: took 11.60679ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815092   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824126   80401 pod_ready.go:92] pod "kube-proxy-rgp5c" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.824151   80401 pod_ready.go:81] duration metric: took 9.050394ms for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824163   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176378   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:32.176404   80401 pod_ready.go:81] duration metric: took 352.232802ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176414   80401 pod_ready.go:38] duration metric: took 12.437548785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:32.176430   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:32.176492   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:32.190918   80401 api_server.go:72] duration metric: took 12.680546008s to wait for apiserver process to appear ...
	I0717 18:45:32.190942   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:32.190963   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:45:32.196011   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:45:32.197004   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:45:32.197024   80401 api_server.go:131] duration metric: took 6.075734ms to wait for apiserver health ...
	I0717 18:45:32.197033   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:32.379383   80401 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:32.379412   80401 system_pods.go:61] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.379416   80401 system_pods.go:61] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.379420   80401 system_pods.go:61] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.379423   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.379427   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.379431   80401 system_pods.go:61] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.379433   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.379439   80401 system_pods.go:61] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.379442   80401 system_pods.go:61] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.379450   80401 system_pods.go:74] duration metric: took 182.412193ms to wait for pod list to return data ...
	I0717 18:45:32.379456   80401 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:32.576324   80401 default_sa.go:45] found service account: "default"
	I0717 18:45:32.576348   80401 default_sa.go:55] duration metric: took 196.886306ms for default service account to be created ...
	I0717 18:45:32.576357   80401 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:32.780237   80401 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:32.780266   80401 system_pods.go:89] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.780272   80401 system_pods.go:89] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.780276   80401 system_pods.go:89] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.780280   80401 system_pods.go:89] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.780284   80401 system_pods.go:89] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.780288   80401 system_pods.go:89] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.780291   80401 system_pods.go:89] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.780298   80401 system_pods.go:89] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.780302   80401 system_pods.go:89] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.780314   80401 system_pods.go:126] duration metric: took 203.948509ms to wait for k8s-apps to be running ...
	I0717 18:45:32.780323   80401 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:32.780368   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:32.796763   80401 system_svc.go:56] duration metric: took 16.430293ms WaitForService to wait for kubelet
	I0717 18:45:32.796791   80401 kubeadm.go:582] duration metric: took 13.286425468s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:32.796809   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:32.977271   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:32.977295   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:32.977305   80401 node_conditions.go:105] duration metric: took 180.491938ms to run NodePressure ...
	I0717 18:45:32.977315   80401 start.go:241] waiting for startup goroutines ...
	I0717 18:45:32.977322   80401 start.go:246] waiting for cluster config update ...
	I0717 18:45:32.977331   80401 start.go:255] writing updated cluster config ...
	I0717 18:45:32.977544   80401 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:33.022678   80401 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 18:45:33.024737   80401 out.go:177] * Done! kubectl is now configured to use "no-preload-066175" cluster and "default" namespace by default
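Although the start completed, the metrics-server pod listed earlier (metrics-server-78fcd8795b-kj29z) was still Pending / ContainersNotReady; the addon in this test profile is pointed at the placeholder image fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), so its image pull cannot succeed, which is consistent with the metrics-server-related timeouts recorded in this report. One way to confirm the pull failure (illustrative, not executed in this run) would be:
	# Illustrative only: show status and events for the never-ready metrics-server pod named in the log above.
	kubectl --context no-preload-066175 -n kube-system describe pod metrics-server-78fcd8795b-kj29z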
	I0717 18:45:33.625503   81068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.034773328s)
	I0717 18:45:33.625584   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:33.640151   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:33.650198   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:33.659027   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:33.659048   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:33.659088   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:45:33.667607   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:33.667663   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:33.677632   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:45:33.685631   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:33.685683   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:33.694068   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.702840   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:33.702894   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.711560   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:45:33.719883   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:33.719928   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:33.729898   81068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:33.781672   81068 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:45:33.781776   81068 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:33.908046   81068 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:33.908199   81068 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:33.908366   81068 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:45:34.103926   81068 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:34.105872   81068 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:34.105979   81068 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:34.106063   81068 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:34.106183   81068 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:34.106425   81068 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:34.106542   81068 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:34.106624   81068 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:34.106729   81068 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:34.106827   81068 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:34.106901   81068 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:34.106984   81068 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:34.107046   81068 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:34.107142   81068 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:34.390326   81068 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:34.442610   81068 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:34.692719   81068 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:34.777644   81068 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:35.101349   81068 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:35.102039   81068 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:35.104892   81068 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:35.106561   81068 out.go:204]   - Booting up control plane ...
	I0717 18:45:35.106689   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:35.106775   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:35.107611   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:35.126132   81068 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:35.127180   81068 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:35.127245   81068 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:35.250173   81068 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:35.250284   81068 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:35.752731   81068 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.583425ms
	I0717 18:45:35.752861   81068 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:45:40.754304   81068 kubeadm.go:310] [api-check] The API server is healthy after 5.001385597s
	I0717 18:45:40.766072   81068 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:40.785708   81068 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:40.816360   81068 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:40.816576   81068 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-022930 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:40.830588   81068 kubeadm.go:310] [bootstrap-token] Using token: kxmxsp.4wnt2q9oqhdfdirj
	I0717 18:45:40.831905   81068 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:40.832031   81068 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:40.840754   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:40.850104   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:40.853748   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:40.857341   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:40.860783   81068 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:41.161978   81068 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:41.600410   81068 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:42.161763   81068 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:42.163450   81068 kubeadm.go:310] 
	I0717 18:45:42.163541   81068 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:42.163558   81068 kubeadm.go:310] 
	I0717 18:45:42.163661   81068 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:42.163673   81068 kubeadm.go:310] 
	I0717 18:45:42.163707   81068 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:42.163797   81068 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:42.163870   81068 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:42.163881   81068 kubeadm.go:310] 
	I0717 18:45:42.163974   81068 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:42.163990   81068 kubeadm.go:310] 
	I0717 18:45:42.164058   81068 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:42.164077   81068 kubeadm.go:310] 
	I0717 18:45:42.164151   81068 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:42.164256   81068 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:42.164367   81068 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:42.164377   81068 kubeadm.go:310] 
	I0717 18:45:42.164489   81068 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:42.164588   81068 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:42.164595   81068 kubeadm.go:310] 
	I0717 18:45:42.164683   81068 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.164826   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:42.164862   81068 kubeadm.go:310] 	--control-plane 
	I0717 18:45:42.164870   81068 kubeadm.go:310] 
	I0717 18:45:42.165002   81068 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:42.165012   81068 kubeadm.go:310] 
	I0717 18:45:42.165143   81068 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.165257   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:42.166381   81068 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:42.166436   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:45:42.166456   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:42.168387   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:42.169678   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:42.180065   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:45:42.197116   81068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:42.197192   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.197217   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-022930 minikube.k8s.io/updated_at=2024_07_17T18_45_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=default-k8s-diff-port-022930 minikube.k8s.io/primary=true
	I0717 18:45:42.216456   81068 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:42.370148   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.870732   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.370980   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.871201   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.370616   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.370377   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.870614   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.370555   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.870513   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.370594   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.870651   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.370620   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.870863   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.371058   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.870188   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.370949   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.871187   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.370764   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.370298   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.870917   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.371193   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.870491   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.370274   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.871160   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.370879   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.870592   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.948131   81068 kubeadm.go:1113] duration metric: took 13.751000929s to wait for elevateKubeSystemPrivileges
	I0717 18:45:55.948166   81068 kubeadm.go:394] duration metric: took 5m11.453950834s to StartCluster
	I0717 18:45:55.948188   81068 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.948265   81068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:55.950777   81068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.951066   81068 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:55.951134   81068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:55.951202   81068 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951237   81068 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951247   81068 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:55.951243   81068 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951257   81068 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951293   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:45:55.951300   81068 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951318   81068 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:55.951319   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951348   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951292   81068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-022930"
	I0717 18:45:55.951712   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951732   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951769   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951747   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.952885   81068 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:55.954423   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:55.968158   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0717 18:45:55.968547   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
	I0717 18:45:55.968768   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.968917   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.969414   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969436   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969548   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969566   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969814   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970012   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970235   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.970413   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.970462   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.970809   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44281
	I0717 18:45:55.971165   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.974130   81068 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.974155   81068 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:55.974184   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.974549   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.974578   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.981608   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.981640   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.982054   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.982711   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.982754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.990665   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0717 18:45:55.991297   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.991922   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.991938   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.992213   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.992346   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.993952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:55.996135   81068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:55.997555   81068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:55.997579   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:55.997602   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:55.998414   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0717 18:45:55.998963   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.999540   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.999554   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.000799   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0717 18:45:56.001014   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001096   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.001419   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.001512   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.001527   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001755   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.001929   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.002102   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.002141   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:56.002178   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:56.002255   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.002686   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.002709   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.003047   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.003251   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.004660   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.006355   81068 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:56.007646   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:56.007663   81068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:56.007678   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.010711   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.011220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011452   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.011637   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.011806   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.011932   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.021277   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0717 18:45:56.021980   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.022568   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.022585   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.022949   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.023127   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.025023   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.025443   81068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.025458   81068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:56.025476   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.028095   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.028477   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028666   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.028853   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.029081   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.029226   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.173482   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:56.194585   81068 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203594   81068 node_ready.go:49] node "default-k8s-diff-port-022930" has status "Ready":"True"
	I0717 18:45:56.203614   81068 node_ready.go:38] duration metric: took 8.994875ms for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203623   81068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:56.207834   81068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212424   81068 pod_ready.go:92] pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.212444   81068 pod_ready.go:81] duration metric: took 4.58857ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212454   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217013   81068 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.217031   81068 pod_ready.go:81] duration metric: took 4.569971ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217040   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221441   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.221458   81068 pod_ready.go:81] duration metric: took 4.411121ms for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221470   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.268740   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:56.268765   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:56.290194   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.310957   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:56.310981   81068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:56.352789   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:56.352821   81068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:56.378402   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:56.379632   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:56.518737   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.518766   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519075   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519097   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.519108   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.519117   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519352   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519383   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519426   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.529290   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.529317   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.529618   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.529680   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.529697   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386401   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007961919s)
	I0717 18:45:57.386463   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.386480   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386925   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.386980   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386999   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.387017   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386958   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.387283   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.387304   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731240   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351571451s)
	I0717 18:45:57.731287   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731616   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.731650   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731664   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731672   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731685   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731905   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731930   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731949   81068 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-022930"
	I0717 18:45:57.731960   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.734601   81068 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 18:45:53.693038   80180 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.028164403s)
	I0717 18:45:53.693099   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:53.709020   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:53.718790   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:53.728384   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:53.728405   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:53.728444   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:53.737315   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:53.737384   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:53.746336   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:53.754297   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:53.754347   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:53.763252   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.772186   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:53.772229   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.780829   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:53.788899   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:53.788955   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:53.797324   80180 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:53.982580   80180 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:57.735769   81068 addons.go:510] duration metric: took 1.784634456s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 18:45:57.742312   81068 pod_ready.go:92] pod "kube-proxy-hnb5v" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.742333   81068 pod_ready.go:81] duration metric: took 1.520854667s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.742344   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809858   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.809885   81068 pod_ready.go:81] duration metric: took 67.527182ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809896   81068 pod_ready.go:38] duration metric: took 1.606263576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:57.809914   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:57.809972   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:57.847337   81068 api_server.go:72] duration metric: took 1.896234247s to wait for apiserver process to appear ...
	I0717 18:45:57.847366   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:57.847391   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:45:57.853537   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:45:57.856587   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:45:57.856661   81068 api_server.go:131] duration metric: took 9.286402ms to wait for apiserver health ...
	I0717 18:45:57.856684   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:58.002336   81068 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:58.002374   81068 system_pods.go:61] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002383   81068 system_pods.go:61] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002396   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.002402   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.002408   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.002414   81068 system_pods.go:61] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.002418   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.002425   81068 system_pods.go:61] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.002435   81068 system_pods.go:61] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.002452   81068 system_pods.go:74] duration metric: took 145.752129ms to wait for pod list to return data ...
	I0717 18:45:58.002463   81068 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:58.197223   81068 default_sa.go:45] found service account: "default"
	I0717 18:45:58.197250   81068 default_sa.go:55] duration metric: took 194.774408ms for default service account to be created ...
	I0717 18:45:58.197260   81068 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:58.401825   81068 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:58.401878   81068 system_pods.go:89] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401891   81068 system_pods.go:89] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401904   81068 system_pods.go:89] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.401917   81068 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.401927   81068 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.401935   81068 system_pods.go:89] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.401940   81068 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.401948   81068 system_pods.go:89] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.401956   81068 system_pods.go:89] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.401965   81068 system_pods.go:126] duration metric: took 204.700297ms to wait for k8s-apps to be running ...
	I0717 18:45:58.401975   81068 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:58.402024   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:58.416020   81068 system_svc.go:56] duration metric: took 14.023536ms WaitForService to wait for kubelet
	I0717 18:45:58.416056   81068 kubeadm.go:582] duration metric: took 2.464957357s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:58.416079   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:58.598829   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:58.598863   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:58.598876   81068 node_conditions.go:105] duration metric: took 182.791383ms to run NodePressure ...
	I0717 18:45:58.598891   81068 start.go:241] waiting for startup goroutines ...
	I0717 18:45:58.598899   81068 start.go:246] waiting for cluster config update ...
	I0717 18:45:58.598912   81068 start.go:255] writing updated cluster config ...
	I0717 18:45:58.599267   81068 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:58.661380   81068 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:45:58.663085   81068 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-022930" cluster and "default" namespace by default
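	For readers following the addon enablement sequence above, a hand-run spot check of the metrics-server addon on this profile would look roughly like the commands below. They are illustrative only and are not part of the captured log; the k8s-app=metrics-server label is assumed from the addon's usual manifest.

	# Illustrative spot check (not from the log): confirm the metrics-server
	# deployment and pod applied by the enable step are actually coming up.
	kubectl --context default-k8s-diff-port-022930 -n kube-system get deploy metrics-server
	kubectl --context default-k8s-diff-port-022930 -n kube-system get pods -l k8s-app=metrics-server   # label assumed
	# minikube's own addon view for the same profile:
	minikube -p default-k8s-diff-port-022930 addons list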
	I0717 18:46:02.558673   80180 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:46:02.558766   80180 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:02.558842   80180 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:02.558980   80180 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:02.559118   80180 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:02.559210   80180 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:02.561934   80180 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:02.562036   80180 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:02.562108   80180 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:02.562191   80180 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:02.562290   80180 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:02.562393   80180 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:02.562478   80180 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:02.562565   80180 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:02.562643   80180 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:02.562711   80180 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:02.562826   80180 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:02.562886   80180 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:02.562958   80180 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:02.563005   80180 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:02.563081   80180 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:46:02.563136   80180 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:02.563210   80180 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:02.563293   80180 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:02.563405   80180 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:02.563468   80180 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:02.564989   80180 out.go:204]   - Booting up control plane ...
	I0717 18:46:02.565092   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:02.565181   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:02.565270   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:02.565400   80180 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:02.565526   80180 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:02.565597   80180 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:02.565783   80180 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:46:02.565880   80180 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:46:02.565959   80180 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.323304ms
	I0717 18:46:02.566046   80180 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:46:02.566105   80180 kubeadm.go:310] [api-check] The API server is healthy after 5.002038309s
	I0717 18:46:02.566206   80180 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:46:02.566307   80180 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:46:02.566359   80180 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:46:02.566525   80180 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-527415 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:46:02.566575   80180 kubeadm.go:310] [bootstrap-token] Using token: xeax16.7z40teb0jswemrgg
	I0717 18:46:02.568038   80180 out.go:204]   - Configuring RBAC rules ...
	I0717 18:46:02.568120   80180 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:46:02.568194   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:46:02.568314   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:46:02.568449   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:46:02.568553   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:46:02.568660   80180 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:46:02.568807   80180 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:46:02.568877   80180 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:46:02.568926   80180 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:46:02.568936   80180 kubeadm.go:310] 
	I0717 18:46:02.569032   80180 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:46:02.569044   80180 kubeadm.go:310] 
	I0717 18:46:02.569108   80180 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:46:02.569114   80180 kubeadm.go:310] 
	I0717 18:46:02.569157   80180 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:46:02.569249   80180 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:46:02.569326   80180 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:46:02.569346   80180 kubeadm.go:310] 
	I0717 18:46:02.569432   80180 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:46:02.569442   80180 kubeadm.go:310] 
	I0717 18:46:02.569511   80180 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:46:02.569519   80180 kubeadm.go:310] 
	I0717 18:46:02.569599   80180 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:46:02.569695   80180 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:46:02.569790   80180 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:46:02.569797   80180 kubeadm.go:310] 
	I0717 18:46:02.569905   80180 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:46:02.569985   80180 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:46:02.569998   80180 kubeadm.go:310] 
	I0717 18:46:02.570096   80180 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570234   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:46:02.570264   80180 kubeadm.go:310] 	--control-plane 
	I0717 18:46:02.570273   80180 kubeadm.go:310] 
	I0717 18:46:02.570348   80180 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:46:02.570355   80180 kubeadm.go:310] 
	I0717 18:46:02.570429   80180 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570555   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:46:02.570569   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:46:02.570578   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:46:02.571934   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:46:02.573034   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:46:02.583253   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:46:02.603658   80180 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-527415 minikube.k8s.io/updated_at=2024_07_17T18_46_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=embed-certs-527415 minikube.k8s.io/primary=true
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:02.621414   80180 ops.go:34] apiserver oom_adj: -16
	I0717 18:46:02.792226   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.292632   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.792270   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.293220   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.793011   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.292596   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.793043   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.293286   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.793069   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.292569   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.792604   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.293028   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.792259   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.292273   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.792672   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.293080   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.792442   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.292894   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.792436   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.292411   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.792327   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.292909   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.792878   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.293188   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.793038   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.292453   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.792367   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.898487   80180 kubeadm.go:1113] duration metric: took 13.294815165s to wait for elevateKubeSystemPrivileges
	I0717 18:46:15.898528   80180 kubeadm.go:394] duration metric: took 5m13.234208822s to StartCluster
	I0717 18:46:15.898546   80180 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.898626   80180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:46:15.900239   80180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.900462   80180 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:46:15.900564   80180 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:46:15.900648   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:46:15.900655   80180 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-527415"
	I0717 18:46:15.900667   80180 addons.go:69] Setting default-storageclass=true in profile "embed-certs-527415"
	I0717 18:46:15.900691   80180 addons.go:69] Setting metrics-server=true in profile "embed-certs-527415"
	I0717 18:46:15.900704   80180 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-527415"
	I0717 18:46:15.900709   80180 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-527415"
	I0717 18:46:15.900714   80180 addons.go:234] Setting addon metrics-server=true in "embed-certs-527415"
	W0717 18:46:15.900747   80180 addons.go:243] addon metrics-server should already be in state true
	I0717 18:46:15.900777   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	W0717 18:46:15.900715   80180 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:46:15.900852   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.901106   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901150   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901152   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901183   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901264   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901298   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.902177   80180 out.go:177] * Verifying Kubernetes components...
	I0717 18:46:15.903698   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:46:15.918294   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0717 18:46:15.918295   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0717 18:46:15.918859   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.918909   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919433   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919455   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919478   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I0717 18:46:15.919548   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919572   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919788   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.919875   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919883   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920316   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920323   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.920338   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.920345   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920387   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920425   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920695   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920890   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.924623   80180 addons.go:234] Setting addon default-storageclass=true in "embed-certs-527415"
	W0717 18:46:15.924644   80180 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:46:15.924672   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.925801   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.925830   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.936020   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0717 18:46:15.936280   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0717 18:46:15.936365   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.936674   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.937144   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937164   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937229   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937239   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937565   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937587   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937770   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.937872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.939671   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.939856   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.941929   80180 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:46:15.941934   80180 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:46:15.943632   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:46:15.943650   80180 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:46:15.943668   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.943715   80180 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:15.943724   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:46:15.943737   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.946283   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0717 18:46:15.946815   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.947230   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.947240   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.947272   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.947953   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.947987   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948001   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.948179   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.948223   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948248   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.948388   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.948604   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.948627   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.948653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948832   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.948870   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.948895   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.949086   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.949307   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.949454   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.969385   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0717 18:46:15.969789   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.970221   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.970241   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.970756   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.970963   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.972631   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.972849   80180 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:15.972868   80180 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:46:15.972889   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.975680   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976123   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.976187   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976320   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.976496   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.976657   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.976748   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:16.134605   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:46:16.206139   80180 node_ready.go:35] waiting up to 6m0s for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214532   80180 node_ready.go:49] node "embed-certs-527415" has status "Ready":"True"
	I0717 18:46:16.214550   80180 node_ready.go:38] duration metric: took 8.382109ms for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214568   80180 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:16.223573   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:16.254146   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:46:16.254166   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:46:16.293257   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:16.312304   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:16.334927   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:46:16.334949   80180 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:46:16.404696   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:16.404723   80180 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:46:16.462835   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281088   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281157   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281395   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281402   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281424   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281427   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281432   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281436   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281676   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281678   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281700   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281705   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281722   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281732   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.300264   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.300294   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.300592   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.300643   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.300672   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.489477   80180 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026593042s)
	I0717 18:46:17.489520   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.489534   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490020   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.490047   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490055   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490068   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.490077   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490344   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490373   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490384   80180 addons.go:475] Verifying addon metrics-server=true in "embed-certs-527415"
	I0717 18:46:17.490397   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.492257   80180 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:46:17.493487   80180 addons.go:510] duration metric: took 1.592928152s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 18:46:18.230569   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.230592   80180 pod_ready.go:81] duration metric: took 2.006995421s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.230603   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235298   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.235317   80180 pod_ready.go:81] duration metric: took 4.707534ms for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235327   80180 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.238998   80180 pod_ready.go:92] pod "etcd-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.239015   80180 pod_ready.go:81] duration metric: took 3.681191ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.239023   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242949   80180 pod_ready.go:92] pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.242967   80180 pod_ready.go:81] duration metric: took 3.937614ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242977   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246567   80180 pod_ready.go:92] pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.246580   80180 pod_ready.go:81] duration metric: took 3.597434ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246588   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628607   80180 pod_ready.go:92] pod "kube-proxy-m52fq" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.628636   80180 pod_ready.go:81] duration metric: took 382.042151ms for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628650   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028536   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:19.028558   80180 pod_ready.go:81] duration metric: took 399.900565ms for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028565   80180 pod_ready.go:38] duration metric: took 2.813989212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:19.028578   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:46:19.028630   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:46:19.044787   80180 api_server.go:72] duration metric: took 3.144295616s to wait for apiserver process to appear ...
	I0717 18:46:19.044810   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:46:19.044825   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:46:19.051106   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:46:19.052094   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:46:19.052111   80180 api_server.go:131] duration metric: took 7.296406ms to wait for apiserver health ...
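The healthz probe logged above can be reproduced by hand; a minimal sketch, assuming anonymous access to /healthz is still allowed (the Kubernetes default) and using the endpoint shown in the log:

    # query the apiserver health endpoint from the log; a healthy control plane prints "ok"
    curl -sk https://192.168.61.90:8443/healthz
    # individual checks are also exposed as sub-paths, e.g. the etcd check
    curl -sk https://192.168.61.90:8443/healthz/etcd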
	I0717 18:46:19.052117   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:46:19.231877   80180 system_pods.go:59] 9 kube-system pods found
	I0717 18:46:19.231905   80180 system_pods.go:61] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.231912   80180 system_pods.go:61] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.231916   80180 system_pods.go:61] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.231921   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.231925   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.231929   80180 system_pods.go:61] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.231934   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.231942   80180 system_pods.go:61] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.231947   80180 system_pods.go:61] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.231957   80180 system_pods.go:74] duration metric: took 179.833729ms to wait for pod list to return data ...
	I0717 18:46:19.231966   80180 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:46:19.427972   80180 default_sa.go:45] found service account: "default"
	I0717 18:46:19.427994   80180 default_sa.go:55] duration metric: took 196.021611ms for default service account to be created ...
	I0717 18:46:19.428002   80180 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:46:19.630730   80180 system_pods.go:86] 9 kube-system pods found
	I0717 18:46:19.630755   80180 system_pods.go:89] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.630760   80180 system_pods.go:89] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.630765   80180 system_pods.go:89] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.630769   80180 system_pods.go:89] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.630774   80180 system_pods.go:89] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.630778   80180 system_pods.go:89] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.630782   80180 system_pods.go:89] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.630788   80180 system_pods.go:89] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.630792   80180 system_pods.go:89] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.630800   80180 system_pods.go:126] duration metric: took 202.793522ms to wait for k8s-apps to be running ...
	I0717 18:46:19.630806   80180 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:46:19.630849   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:19.646111   80180 system_svc.go:56] duration metric: took 15.296964ms WaitForService to wait for kubelet
	I0717 18:46:19.646133   80180 kubeadm.go:582] duration metric: took 3.745647205s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:46:19.646149   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:46:19.828333   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:46:19.828356   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:46:19.828368   80180 node_conditions.go:105] duration metric: took 182.213813ms to run NodePressure ...
	I0717 18:46:19.828381   80180 start.go:241] waiting for startup goroutines ...
	I0717 18:46:19.828389   80180 start.go:246] waiting for cluster config update ...
	I0717 18:46:19.828401   80180 start.go:255] writing updated cluster config ...
	I0717 18:46:19.828690   80180 ssh_runner.go:195] Run: rm -f paused
	I0717 18:46:19.877774   80180 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:46:19.879769   80180 out.go:177] * Done! kubectl is now configured to use "embed-certs-527415" cluster and "default" namespace by default
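Once the profile is up and the addons are enabled, the state reported above can be checked from the host. A minimal sketch, assuming the kubectl context name printed in the log and the usual metrics-server deployment name (not confirmed elsewhere in this report):

    # list the kube-system pods seen in the log, including metrics-server and storage-provisioner
    kubectl --context embed-certs-527415 -n kube-system get pods
    # the metrics API typically needs a short while after the addon is applied before this works
    kubectl --context embed-certs-527415 top nodes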
	I0717 18:46:33.124646   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:46:33.124790   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:46:33.126245   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.126307   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.126409   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.126547   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.126673   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:33.126734   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:33.128541   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:33.128626   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:33.128707   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:33.128817   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:33.128901   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:33.129018   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:33.129091   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:33.129172   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:33.129249   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:33.129339   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:33.129408   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:33.129444   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:33.129532   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:33.129603   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:33.129665   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:33.129765   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:33.129812   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:33.129929   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:33.130037   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:33.130093   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:33.130177   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:33.131546   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:33.131652   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:33.131750   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:33.131858   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:33.131939   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:33.132085   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:46:33.132133   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:46:33.132189   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132355   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132419   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132585   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132657   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132839   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132900   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133143   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133248   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133452   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133460   80857 kubeadm.go:310] 
	I0717 18:46:33.133494   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:46:33.133529   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:46:33.133535   80857 kubeadm.go:310] 
	I0717 18:46:33.133564   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:46:33.133599   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:46:33.133727   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:46:33.133752   80857 kubeadm.go:310] 
	I0717 18:46:33.133905   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:46:33.133947   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:46:33.134002   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:46:33.134012   80857 kubeadm.go:310] 
	I0717 18:46:33.134116   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:46:33.134186   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:46:33.134193   80857 kubeadm.go:310] 
	I0717 18:46:33.134290   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:46:33.134367   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:46:33.134431   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:46:33.134491   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:46:33.134533   80857 kubeadm.go:310] 
	W0717 18:46:33.134615   80857 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
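The kubeadm failure above already lists the useful follow-up checks; collected into one runnable snippet (commands taken verbatim from the error message, to be run on the affected node):

    # is the kubelet running, and what is it logging?
    systemctl status kubelet
    journalctl -xeu kubelet
    # list control-plane containers under CRI-O, then inspect the failing one
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID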
	
	I0717 18:46:33.134669   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:46:33.590879   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:33.605393   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:46:33.614382   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:46:33.614405   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:46:33.614450   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:46:33.622849   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:46:33.622905   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:46:33.631852   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:46:33.640160   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:46:33.640211   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:46:33.648774   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.656740   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:46:33.656796   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.665799   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:46:33.674492   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:46:33.674547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
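The stale-config cleanup above repeats the same grep-then-remove step once per kubeconfig file; an equivalent loop, illustrative only and using the paths and control-plane URL from the log, would be:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected control plane
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done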
	I0717 18:46:33.683627   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:46:33.746405   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.746472   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.881152   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.881297   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.881443   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:34.053199   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:34.055757   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:34.055843   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:34.055918   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:34.056030   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:34.056129   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:34.056232   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:34.056336   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:34.056431   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:34.056524   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:34.056656   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:34.056764   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:34.056824   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:34.056900   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:34.276456   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:34.491418   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:34.702265   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:34.874511   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:34.895484   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:34.896451   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:34.896536   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:35.040208   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:35.042291   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:35.042437   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:35.042565   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:35.044391   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:35.046206   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:35.050843   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:47:15.053070   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:47:15.053416   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:15.053586   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:20.053963   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:20.054207   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:30.054801   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:30.055011   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:50.055270   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:50.055465   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.053919   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:48:30.054133   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.054148   80857 kubeadm.go:310] 
	I0717 18:48:30.054231   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:48:30.054300   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:48:30.054326   80857 kubeadm.go:310] 
	I0717 18:48:30.054386   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:48:30.054443   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:48:30.054581   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:48:30.054593   80857 kubeadm.go:310] 
	I0717 18:48:30.054715   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:48:30.054761   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:48:30.054810   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:48:30.054818   80857 kubeadm.go:310] 
	I0717 18:48:30.054970   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:48:30.055069   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:48:30.055081   80857 kubeadm.go:310] 
	I0717 18:48:30.055236   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:48:30.055332   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:48:30.055396   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:48:30.055457   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:48:30.055483   80857 kubeadm.go:310] 
	I0717 18:48:30.056139   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:48:30.056246   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:48:30.056338   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:48:30.056413   80857 kubeadm.go:394] duration metric: took 8m2.908780359s to StartCluster
	I0717 18:48:30.056461   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:48:30.056524   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:48:30.102640   80857 cri.go:89] found id: ""
	I0717 18:48:30.102662   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.102669   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:48:30.102674   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:48:30.102724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:48:30.142516   80857 cri.go:89] found id: ""
	I0717 18:48:30.142548   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.142559   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:48:30.142567   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:48:30.142630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:48:30.178558   80857 cri.go:89] found id: ""
	I0717 18:48:30.178589   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.178598   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:48:30.178604   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:48:30.178677   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:48:30.211146   80857 cri.go:89] found id: ""
	I0717 18:48:30.211177   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.211186   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:48:30.211192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:48:30.211242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:48:30.244287   80857 cri.go:89] found id: ""
	I0717 18:48:30.244308   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.244314   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:48:30.244319   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:48:30.244364   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:48:30.274547   80857 cri.go:89] found id: ""
	I0717 18:48:30.274577   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.274587   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:48:30.274594   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:48:30.274660   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:48:30.306796   80857 cri.go:89] found id: ""
	I0717 18:48:30.306825   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.306835   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:48:30.306842   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:48:30.306903   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:48:30.341938   80857 cri.go:89] found id: ""
	I0717 18:48:30.341962   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.341972   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:48:30.341982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:48:30.341997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:48:30.407881   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:48:30.407925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:48:30.430885   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:48:30.430913   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:48:30.525366   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:48:30.525394   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:48:30.525408   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:48:30.639556   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:48:30.639588   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
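The diagnostics gathered above can be collected by hand with the same commands; a minimal sketch, with unit names, flags, and the minikube-managed kubectl path taken from the log:

    # kubelet and CRI-O journals, most recent 400 lines each
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # kernel warnings and errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # node description via the cluster kubeconfig (fails here because the apiserver never came up)
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # container status under CRI-O
    sudo crictl ps -a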
	W0717 18:48:30.677493   80857 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 18:48:30.677544   80857 out.go:239] * 
	W0717 18:48:30.677604   80857 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.677636   80857 out.go:239] * 
	W0717 18:48:30.678483   80857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:48:30.681792   80857 out.go:177] 
	W0717 18:48:30.682976   80857 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.683034   80857 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 18:48:30.683050   80857 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 18:48:30.684325   80857 out.go:177] 
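	The suggestion logged above points at the kubelet cgroup driver. A minimal retry under that assumption (the profile name is not shown here, so <profile> below is a stand-in) would look like:
	        minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	If the kubelet still fails to come up, the next things to inspect are the ones the kubeadm output itself names:
	        journalctl -xeu kubelet
	        crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause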
	
	
	==> CRI-O <==
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.017312821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242475017287542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5b6c126-523d-457a-8bd1-91bae8738401 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.018250513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94d5d128-2f84-4db2-ae33-5edc96060078 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.018336874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94d5d128-2f84-4db2-ae33-5edc96060078 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.018549010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:219119fb42a606572c48ffd89d1db8c75d28f283757c8a3aceebcd1547002903,PodSandboxId:0712ba80efc2eeb4c0f7a4de9f9313bf552e435868cba09fe7e1e97faec06ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921506788534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-r9xns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29624b73-848d-4a35-96bc-92f9627842fe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00db5d50aef9cac73ee3b0add4694bea89c8599c4239a5b742f76e0ad78b95b,PodSandboxId:b52438192d162323e79e91ecdf9a9388dfd4d1f64d74eee93274b3dce06e84b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921485923660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tx7nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 085ec394-1ca7-4b9b-9b54-b4fdab45bd75,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:412fe67a8c48127d6c17bfe9b629a684a421319e0d6df01e28e0cedc335b5b09,PodSandboxId:2e11af8b33d3f8a8f973acacc5e1704033b24b19c96a5535e1d901ca5d6d196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721241921060595751,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9730cf9-c0f1-4afc-94cc-cbd825158d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b6df40c85430006c2c744f618ad3022fb30f55be4f098adf069ae9a98e12db,PodSandboxId:ae6bd4e20bf24dd17924b7cbf69ea6fbac7c95bb15c90afc31ec91dbde1e8d39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721241919831588645,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaedb8f-b248-43ac-bd49-4f97d26aa1f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ebbc1444f5b55d1384bdec39e384901df7910fa5870cabef06fb9ae0d5804e,PodSandboxId:7ae8c4f26db78059b078cd1f618cd5ffaf77045de4d1bf3fd277a37153cc9672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721241909297118556,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c87a1116618d10cb40a78d0612b9d76a561cf9ad929a91228b68060259248098,PodSandboxId:28dcf525284313916749748cc137ac8a57ed031581a2e7c23485716f22bc769a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721241909219163696,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83db3e8a25d12043a2cc2b56a7a5959d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd3621d46cb4fa80bda24f313af1c43f26c07ceedfabfd7100a19a1d3c1b5ed,PodSandboxId:c41eeadad57288abf631f8a21d4c71283cf357404258655478ba36f54a1a7586,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721241909188988683,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b73ab2d9631fbbc6ef1f1e2293feaa,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f70542191feea896610d6dce3b898ef2a6258658ba06942c54e5fb1c8673788,PodSandboxId:5e139ace02f9f4c0e79bd5548ec0251e73c7a31dd5b70d024427a2a1afe0f6d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721241909150221436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab561c20ed9012c8eacc8441045039ea,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0d5f05d9fd479b650215b11c498b3356e4b5d8ba877723e612d3fb09c5675b0,PodSandboxId:33718ca4f01d9d6ddc6b368199f27792f545d366e1d33cac1f0f4b78841c2c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721241621567533537,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94d5d128-2f84-4db2-ae33-5edc96060078 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.055951720Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fca78cfc-ae3d-4825-a7c9-bf876d966144 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.056034654Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fca78cfc-ae3d-4825-a7c9-bf876d966144 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.057127965Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c0182dc-3848-455f-ba56-d357d23a04d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.057954446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242475057920803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c0182dc-3848-455f-ba56-d357d23a04d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.058470092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c05ee41d-d7a9-450c-b132-4696a18cd265 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.058558074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c05ee41d-d7a9-450c-b132-4696a18cd265 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.058940107Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:219119fb42a606572c48ffd89d1db8c75d28f283757c8a3aceebcd1547002903,PodSandboxId:0712ba80efc2eeb4c0f7a4de9f9313bf552e435868cba09fe7e1e97faec06ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921506788534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-r9xns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29624b73-848d-4a35-96bc-92f9627842fe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00db5d50aef9cac73ee3b0add4694bea89c8599c4239a5b742f76e0ad78b95b,PodSandboxId:b52438192d162323e79e91ecdf9a9388dfd4d1f64d74eee93274b3dce06e84b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921485923660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tx7nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 085ec394-1ca7-4b9b-9b54-b4fdab45bd75,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:412fe67a8c48127d6c17bfe9b629a684a421319e0d6df01e28e0cedc335b5b09,PodSandboxId:2e11af8b33d3f8a8f973acacc5e1704033b24b19c96a5535e1d901ca5d6d196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721241921060595751,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9730cf9-c0f1-4afc-94cc-cbd825158d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b6df40c85430006c2c744f618ad3022fb30f55be4f098adf069ae9a98e12db,PodSandboxId:ae6bd4e20bf24dd17924b7cbf69ea6fbac7c95bb15c90afc31ec91dbde1e8d39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721241919831588645,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaedb8f-b248-43ac-bd49-4f97d26aa1f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ebbc1444f5b55d1384bdec39e384901df7910fa5870cabef06fb9ae0d5804e,PodSandboxId:7ae8c4f26db78059b078cd1f618cd5ffaf77045de4d1bf3fd277a37153cc9672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721241909297118556,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c87a1116618d10cb40a78d0612b9d76a561cf9ad929a91228b68060259248098,PodSandboxId:28dcf525284313916749748cc137ac8a57ed031581a2e7c23485716f22bc769a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721241909219163696,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83db3e8a25d12043a2cc2b56a7a5959d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd3621d46cb4fa80bda24f313af1c43f26c07ceedfabfd7100a19a1d3c1b5ed,PodSandboxId:c41eeadad57288abf631f8a21d4c71283cf357404258655478ba36f54a1a7586,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721241909188988683,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b73ab2d9631fbbc6ef1f1e2293feaa,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f70542191feea896610d6dce3b898ef2a6258658ba06942c54e5fb1c8673788,PodSandboxId:5e139ace02f9f4c0e79bd5548ec0251e73c7a31dd5b70d024427a2a1afe0f6d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721241909150221436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab561c20ed9012c8eacc8441045039ea,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0d5f05d9fd479b650215b11c498b3356e4b5d8ba877723e612d3fb09c5675b0,PodSandboxId:33718ca4f01d9d6ddc6b368199f27792f545d366e1d33cac1f0f4b78841c2c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721241621567533537,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c05ee41d-d7a9-450c-b132-4696a18cd265 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.099556701Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=386db9c9-6451-46db-8f3b-93ce8a6bec9d name=/runtime.v1.RuntimeService/Version
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.099712748Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=386db9c9-6451-46db-8f3b-93ce8a6bec9d name=/runtime.v1.RuntimeService/Version
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.100965564Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=448a0c7e-aa00-4bd8-8db6-2331c2dc9282 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.101366913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242475101337875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=448a0c7e-aa00-4bd8-8db6-2331c2dc9282 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.102113257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0ca27b4-4122-4faa-adab-7d1bdcad8ac1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.102191641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0ca27b4-4122-4faa-adab-7d1bdcad8ac1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.102493852Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:219119fb42a606572c48ffd89d1db8c75d28f283757c8a3aceebcd1547002903,PodSandboxId:0712ba80efc2eeb4c0f7a4de9f9313bf552e435868cba09fe7e1e97faec06ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921506788534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-r9xns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29624b73-848d-4a35-96bc-92f9627842fe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00db5d50aef9cac73ee3b0add4694bea89c8599c4239a5b742f76e0ad78b95b,PodSandboxId:b52438192d162323e79e91ecdf9a9388dfd4d1f64d74eee93274b3dce06e84b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921485923660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tx7nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 085ec394-1ca7-4b9b-9b54-b4fdab45bd75,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:412fe67a8c48127d6c17bfe9b629a684a421319e0d6df01e28e0cedc335b5b09,PodSandboxId:2e11af8b33d3f8a8f973acacc5e1704033b24b19c96a5535e1d901ca5d6d196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721241921060595751,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9730cf9-c0f1-4afc-94cc-cbd825158d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b6df40c85430006c2c744f618ad3022fb30f55be4f098adf069ae9a98e12db,PodSandboxId:ae6bd4e20bf24dd17924b7cbf69ea6fbac7c95bb15c90afc31ec91dbde1e8d39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721241919831588645,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaedb8f-b248-43ac-bd49-4f97d26aa1f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ebbc1444f5b55d1384bdec39e384901df7910fa5870cabef06fb9ae0d5804e,PodSandboxId:7ae8c4f26db78059b078cd1f618cd5ffaf77045de4d1bf3fd277a37153cc9672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721241909297118556,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c87a1116618d10cb40a78d0612b9d76a561cf9ad929a91228b68060259248098,PodSandboxId:28dcf525284313916749748cc137ac8a57ed031581a2e7c23485716f22bc769a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721241909219163696,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83db3e8a25d12043a2cc2b56a7a5959d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd3621d46cb4fa80bda24f313af1c43f26c07ceedfabfd7100a19a1d3c1b5ed,PodSandboxId:c41eeadad57288abf631f8a21d4c71283cf357404258655478ba36f54a1a7586,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721241909188988683,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b73ab2d9631fbbc6ef1f1e2293feaa,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f70542191feea896610d6dce3b898ef2a6258658ba06942c54e5fb1c8673788,PodSandboxId:5e139ace02f9f4c0e79bd5548ec0251e73c7a31dd5b70d024427a2a1afe0f6d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721241909150221436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab561c20ed9012c8eacc8441045039ea,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0d5f05d9fd479b650215b11c498b3356e4b5d8ba877723e612d3fb09c5675b0,PodSandboxId:33718ca4f01d9d6ddc6b368199f27792f545d366e1d33cac1f0f4b78841c2c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721241621567533537,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0ca27b4-4122-4faa-adab-7d1bdcad8ac1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.136448790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd2acb3c-891a-4ad7-abc3-4487a9b20004 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.136568001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd2acb3c-891a-4ad7-abc3-4487a9b20004 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.146581314Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34742643-e11a-4b67-a687-eaec44b0c3ec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.147364074Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242475147324181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34742643-e11a-4b67-a687-eaec44b0c3ec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.148006197Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4bcade3-1709-4422-9a80-ff23baf99da2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.148134780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4bcade3-1709-4422-9a80-ff23baf99da2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:54:35 no-preload-066175 crio[725]: time="2024-07-17 18:54:35.148409305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:219119fb42a606572c48ffd89d1db8c75d28f283757c8a3aceebcd1547002903,PodSandboxId:0712ba80efc2eeb4c0f7a4de9f9313bf552e435868cba09fe7e1e97faec06ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921506788534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-r9xns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29624b73-848d-4a35-96bc-92f9627842fe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00db5d50aef9cac73ee3b0add4694bea89c8599c4239a5b742f76e0ad78b95b,PodSandboxId:b52438192d162323e79e91ecdf9a9388dfd4d1f64d74eee93274b3dce06e84b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921485923660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tx7nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 085ec394-1ca7-4b9b-9b54-b4fdab45bd75,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:412fe67a8c48127d6c17bfe9b629a684a421319e0d6df01e28e0cedc335b5b09,PodSandboxId:2e11af8b33d3f8a8f973acacc5e1704033b24b19c96a5535e1d901ca5d6d196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721241921060595751,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9730cf9-c0f1-4afc-94cc-cbd825158d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b6df40c85430006c2c744f618ad3022fb30f55be4f098adf069ae9a98e12db,PodSandboxId:ae6bd4e20bf24dd17924b7cbf69ea6fbac7c95bb15c90afc31ec91dbde1e8d39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721241919831588645,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaedb8f-b248-43ac-bd49-4f97d26aa1f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ebbc1444f5b55d1384bdec39e384901df7910fa5870cabef06fb9ae0d5804e,PodSandboxId:7ae8c4f26db78059b078cd1f618cd5ffaf77045de4d1bf3fd277a37153cc9672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721241909297118556,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c87a1116618d10cb40a78d0612b9d76a561cf9ad929a91228b68060259248098,PodSandboxId:28dcf525284313916749748cc137ac8a57ed031581a2e7c23485716f22bc769a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721241909219163696,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83db3e8a25d12043a2cc2b56a7a5959d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd3621d46cb4fa80bda24f313af1c43f26c07ceedfabfd7100a19a1d3c1b5ed,PodSandboxId:c41eeadad57288abf631f8a21d4c71283cf357404258655478ba36f54a1a7586,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721241909188988683,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b73ab2d9631fbbc6ef1f1e2293feaa,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f70542191feea896610d6dce3b898ef2a6258658ba06942c54e5fb1c8673788,PodSandboxId:5e139ace02f9f4c0e79bd5548ec0251e73c7a31dd5b70d024427a2a1afe0f6d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721241909150221436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab561c20ed9012c8eacc8441045039ea,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0d5f05d9fd479b650215b11c498b3356e4b5d8ba877723e612d3fb09c5675b0,PodSandboxId:33718ca4f01d9d6ddc6b368199f27792f545d366e1d33cac1f0f4b78841c2c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721241621567533537,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4bcade3-1709-4422-9a80-ff23baf99da2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	219119fb42a60       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   0712ba80efc2e       coredns-5cfdc65f69-r9xns
	b00db5d50aef9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   b52438192d162       coredns-5cfdc65f69-tx7nc
	412fe67a8c481       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2e11af8b33d3f       storage-provisioner
	f8b6df40c8543       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   ae6bd4e20bf24       kube-proxy-rgp5c
	35ebbc1444f5b       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            2                   7ae8c4f26db78       kube-apiserver-no-preload-066175
	c87a1116618d1       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   2                   28dcf52528431       kube-controller-manager-no-preload-066175
	2dd3621d46cb4       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   c41eeadad5728       etcd-no-preload-066175
	8f70542191fee       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   5e139ace02f9f       kube-scheduler-no-preload-066175
	b0d5f05d9fd47       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            1                   33718ca4f01d9       kube-apiserver-no-preload-066175
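	A listing like the one above can typically be reproduced on the node with the crictl invocation quoted earlier in this log (a sketch that assumes CRI-O's socket path as shown in that output):
	        crictl --runtime-endpoint /var/run/crio/crio.sock ps -a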
	
	
	==> coredns [219119fb42a606572c48ffd89d1db8c75d28f283757c8a3aceebcd1547002903] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b00db5d50aef9cac73ee3b0add4694bea89c8599c4239a5b742f76e0ad78b95b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-066175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-066175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=no-preload-066175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_45_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:45:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-066175
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:54:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:50:30 +0000   Wed, 17 Jul 2024 18:45:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:50:30 +0000   Wed, 17 Jul 2024 18:45:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:50:30 +0000   Wed, 17 Jul 2024 18:45:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:50:30 +0000   Wed, 17 Jul 2024 18:45:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.216
	  Hostname:    no-preload-066175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b465df7f5fd4451a211c0080bed4e39
	  System UUID:                5b465df7-f5fd-4451-a211-c0080bed4e39
	  Boot ID:                    ef1cf6fc-b36c-433e-8163-9cbb9e5eb3df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-r9xns                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m17s
	  kube-system                 coredns-5cfdc65f69-tx7nc                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m17s
	  kube-system                 etcd-no-preload-066175                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-no-preload-066175             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-no-preload-066175    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-rgp5c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-scheduler-no-preload-066175             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-78fcd8795b-kj29z              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m15s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node no-preload-066175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node no-preload-066175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m27s)  kubelet          Node no-preload-066175 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node no-preload-066175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node no-preload-066175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node no-preload-066175 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m16s                  node-controller  Node no-preload-066175 event: Registered Node no-preload-066175 in Controller
	
	
	==> dmesg <==
	[  +0.036103] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.419250] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.679395] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.518588] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 18:40] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.059807] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053224] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.184103] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.115805] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.279970] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[ +14.546242] systemd-fstab-generator[1174]: Ignoring "noauto" option for root device
	[  +0.072059] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.577342] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +5.211006] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.269111] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.077301] kauditd_printk_skb: 25 callbacks suppressed
	[Jul17 18:45] systemd-fstab-generator[2943]: Ignoring "noauto" option for root device
	[  +0.063095] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.003910] systemd-fstab-generator[3265]: Ignoring "noauto" option for root device
	[  +0.084961] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.263596] systemd-fstab-generator[3377]: Ignoring "noauto" option for root device
	[  +0.097525] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.073978] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [2dd3621d46cb4fa80bda24f313af1c43f26c07ceedfabfd7100a19a1d3c1b5ed] <==
	{"level":"info","ts":"2024-07-17T18:45:09.599542Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-17T18:45:09.614129Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.72.216:2380"}
	{"level":"info","ts":"2024-07-17T18:45:09.614167Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.72.216:2380"}
	{"level":"info","ts":"2024-07-17T18:45:09.597115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 switched to configuration voters=(1503546924037141827)"}
	{"level":"info","ts":"2024-07-17T18:45:09.61472Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d730758011f9da75","local-member-id":"14ddabf6165d7543","added-peer-id":"14ddabf6165d7543","added-peer-peer-urls":["https://192.168.72.216:2380"]}
	{"level":"info","ts":"2024-07-17T18:45:09.946702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:09.946881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:09.946922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 received MsgPreVoteResp from 14ddabf6165d7543 at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:09.946992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:09.947017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 received MsgVoteResp from 14ddabf6165d7543 at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:09.947082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:09.947108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 14ddabf6165d7543 elected leader 14ddabf6165d7543 at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:09.951859Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:09.952315Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"14ddabf6165d7543","local-member-attributes":"{Name:no-preload-066175 ClientURLs:[https://192.168.72.216:2379]}","request-path":"/0/members/14ddabf6165d7543/attributes","cluster-id":"d730758011f9da75","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:45:09.95267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:09.953102Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:09.955981Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T18:45:09.960899Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.216:2379"}
	{"level":"info","ts":"2024-07-17T18:45:09.961442Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T18:45:09.96419Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T18:45:09.964542Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d730758011f9da75","local-member-id":"14ddabf6165d7543","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:09.968696Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:09.96877Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:09.971676Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:45:09.971706Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:54:35 up 14 min,  0 users,  load average: 0.05, 0.15, 0.10
	Linux no-preload-066175 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [35ebbc1444f5b55d1384bdec39e384901df7910fa5870cabef06fb9ae0d5804e] <==
	W0717 18:50:12.628898       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 18:50:12.628990       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 18:50:12.630096       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 18:50:12.630109       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:51:12.630752       1 handler_proxy.go:99] no RequestInfo found in the context
	W0717 18:51:12.630960       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 18:51:12.631021       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0717 18:51:12.630958       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 18:51:12.632203       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 18:51:12.632252       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:53:12.632409       1 handler_proxy.go:99] no RequestInfo found in the context
	W0717 18:53:12.632764       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 18:53:12.632822       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0717 18:53:12.632829       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 18:53:12.633963       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 18:53:12.634015       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b0d5f05d9fd479b650215b11c498b3356e4b5d8ba877723e612d3fb09c5675b0] <==
	W0717 18:45:01.696011       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.697396       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.703789       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.728238       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.756119       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.798300       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.806153       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.966431       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.118720       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.143938       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.148236       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.162904       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.284148       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.315010       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.365832       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.488534       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.622921       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.714564       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.149796       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.235916       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.351231       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.356058       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.472422       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.513947       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.703000       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c87a1116618d10cb40a78d0612b9d76a561cf9ad929a91228b68060259248098] <==
	E0717 18:49:19.486500       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:49:19.536308       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:49:49.494702       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:49:49.544950       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:50:19.500877       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:50:19.557407       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 18:50:30.036901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-066175"
	E0717 18:50:49.508839       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:50:49.565251       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 18:51:08.510811       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="310.179µs"
	E0717 18:51:19.514838       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:51:19.573149       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 18:51:20.511131       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="130.942µs"
	E0717 18:51:49.521357       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:51:49.582437       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:52:19.527738       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:52:19.599421       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:52:49.535980       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:52:49.607282       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:53:19.542177       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:53:19.615262       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:53:49.549509       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:53:49.623411       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:54:19.556304       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:54:19.640832       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f8b6df40c85430006c2c744f618ad3022fb30f55be4f098adf069ae9a98e12db] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 18:45:20.280767       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 18:45:20.290440       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.216"]
	E0717 18:45:20.290516       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 18:45:20.369075       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 18:45:20.369117       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:45:20.369152       1 server_linux.go:170] "Using iptables Proxier"
	I0717 18:45:20.372893       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 18:45:20.373126       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 18:45:20.373150       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:45:20.374721       1 config.go:197] "Starting service config controller"
	I0717 18:45:20.374745       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:45:20.374765       1 config.go:104] "Starting endpoint slice config controller"
	I0717 18:45:20.374769       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:45:20.375373       1 config.go:326] "Starting node config controller"
	I0717 18:45:20.375399       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:45:20.474876       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:45:20.474946       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:45:20.476382       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8f70542191feea896610d6dce3b898ef2a6258658ba06942c54e5fb1c8673788] <==
	W0717 18:45:11.671430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:45:11.671464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:11.672083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:45:11.672126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.512357       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:45:12.512409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.557583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:45:12.557678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.608394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:45:12.608461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.738879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:45:12.738997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.838597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:45:12.838678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.938070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:45:12.938125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.938274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:45:12.938306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.942146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:45:12.942199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.946781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:45:12.946877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:13.131445       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:45:13.131520       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0717 18:45:15.039226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 18:52:14 no-preload-066175 kubelet[3272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:52:14 no-preload-066175 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:52:14 no-preload-066175 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:52:14 no-preload-066175 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:52:24 no-preload-066175 kubelet[3272]: E0717 18:52:24.492583    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:52:35 no-preload-066175 kubelet[3272]: E0717 18:52:35.492499    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:52:48 no-preload-066175 kubelet[3272]: E0717 18:52:48.494226    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:52:59 no-preload-066175 kubelet[3272]: E0717 18:52:59.492186    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:53:12 no-preload-066175 kubelet[3272]: E0717 18:53:12.494586    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:53:14 no-preload-066175 kubelet[3272]: E0717 18:53:14.533669    3272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:53:14 no-preload-066175 kubelet[3272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:53:14 no-preload-066175 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:53:14 no-preload-066175 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:53:14 no-preload-066175 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:53:27 no-preload-066175 kubelet[3272]: E0717 18:53:27.492986    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:53:38 no-preload-066175 kubelet[3272]: E0717 18:53:38.492701    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:53:50 no-preload-066175 kubelet[3272]: E0717 18:53:50.492592    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:54:01 no-preload-066175 kubelet[3272]: E0717 18:54:01.492693    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:54:14 no-preload-066175 kubelet[3272]: E0717 18:54:14.531964    3272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:54:14 no-preload-066175 kubelet[3272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:54:14 no-preload-066175 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:54:14 no-preload-066175 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:54:14 no-preload-066175 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:54:15 no-preload-066175 kubelet[3272]: E0717 18:54:15.492404    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:54:28 no-preload-066175 kubelet[3272]: E0717 18:54:28.496149    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	
	
	==> storage-provisioner [412fe67a8c48127d6c17bfe9b629a684a421319e0d6df01e28e0cedc335b5b09] <==
	I0717 18:45:21.196031       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:45:21.215021       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:45:21.215081       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:45:21.237919       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:45:21.238072       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-066175_72746047-1445-4b3c-b5b6-a3e5e3f7b418!
	I0717 18:45:21.239184       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2af562a-0d45-40f0-ba2d-7c284b454a5b", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-066175_72746047-1445-4b3c-b5b6-a3e5e3f7b418 became leader
	I0717 18:45:21.338978       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-066175_72746047-1445-4b3c-b5b6-a3e5e3f7b418!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-066175 -n no-preload-066175
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-066175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-kj29z
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-066175 describe pod metrics-server-78fcd8795b-kj29z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-066175 describe pod metrics-server-78fcd8795b-kj29z: exit status 1 (61.738209ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-kj29z" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-066175 describe pod metrics-server-78fcd8795b-kj29z: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 18:46:07.656057   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 18:54:59.182269395 +0000 UTC m=+6213.911465351
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-022930 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-022930 logs -n 25: (2.043061022s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-527415            | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-371172                                        | pause-371172                 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-341716 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | disable-driver-mounts-341716                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:34 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-066175             | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC | 17 Jul 24 18:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-066175                                   | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-022930  | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC | 17 Jul 24 18:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-527415                 | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-019549        | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-066175                  | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-066175 --memory=2200                     | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:45 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-019549             | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-022930       | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC | 17 Jul 24 18:45 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:37:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:37:14.473404   81068 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:37:14.473526   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473535   81068 out.go:304] Setting ErrFile to fd 2...
	I0717 18:37:14.473540   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473714   81068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:37:14.474251   81068 out.go:298] Setting JSON to false
	I0717 18:37:14.475115   81068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8377,"bootTime":1721233057,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:37:14.475172   81068 start.go:139] virtualization: kvm guest
	I0717 18:37:14.477356   81068 out.go:177] * [default-k8s-diff-port-022930] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:37:14.478600   81068 notify.go:220] Checking for updates...
	I0717 18:37:14.478615   81068 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:37:14.480094   81068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:37:14.481516   81068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:37:14.482886   81068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:37:14.484159   81068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:37:14.485449   81068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:37:14.487164   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:37:14.487744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.487795   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.502368   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0717 18:37:14.502712   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.503192   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.503213   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.503574   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.503778   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.504032   81068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:37:14.504326   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.504381   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.518330   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0717 18:37:14.518718   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.519095   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.519114   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.519409   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.519578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.549923   81068 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:37:14.551160   81068 start.go:297] selected driver: kvm2
	I0717 18:37:14.551175   81068 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.551302   81068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:37:14.551931   81068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.552008   81068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:37:14.566038   81068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:37:14.566371   81068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:37:14.566443   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:37:14.566466   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:37:14.566516   81068 start.go:340] cluster config:
	{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.566643   81068 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.568602   81068 out.go:177] * Starting "default-k8s-diff-port-022930" primary control-plane node in "default-k8s-diff-port-022930" cluster
	I0717 18:37:13.057187   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:16.129274   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:14.569868   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:37:14.569908   81068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:37:14.569919   81068 cache.go:56] Caching tarball of preloaded images
	I0717 18:37:14.569992   81068 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:37:14.570003   81068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:37:14.570100   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:37:14.570277   81068 start.go:360] acquireMachinesLock for default-k8s-diff-port-022930: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:37:22.209207   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:25.281226   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:31.361221   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:34.433258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:40.513234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:43.585225   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:49.665198   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:52.737256   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:58.817201   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:01.889213   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:07.969247   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:11.041264   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:17.121227   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:20.193250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:26.273206   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:29.345193   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:35.425259   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:38.497261   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:44.577185   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:47.649306   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:53.729234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:56.801257   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:02.881239   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:05.953258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:12.033251   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:15.105230   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:21.185200   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:24.257195   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:30.337181   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:33.409224   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:39.489219   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:42.561250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:45.565739   80401 start.go:364] duration metric: took 4m11.345351864s to acquireMachinesLock for "no-preload-066175"
	I0717 18:39:45.565801   80401 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:39:45.565807   80401 fix.go:54] fixHost starting: 
	I0717 18:39:45.566167   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:39:45.566198   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:39:45.580996   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45665
	I0717 18:39:45.581389   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:39:45.581797   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:39:45.581817   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:39:45.582145   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:39:45.582323   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:39:45.582467   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:39:45.584074   80401 fix.go:112] recreateIfNeeded on no-preload-066175: state=Stopped err=<nil>
	I0717 18:39:45.584109   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	W0717 18:39:45.584260   80401 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:39:45.586842   80401 out.go:177] * Restarting existing kvm2 VM for "no-preload-066175" ...
	I0717 18:39:45.563046   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:39:45.563105   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563521   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:39:45.563555   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563758   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:39:45.565594   80180 machine.go:97] duration metric: took 4m37.427146226s to provisionDockerMachine
	I0717 18:39:45.565643   80180 fix.go:56] duration metric: took 4m37.448013968s for fixHost
	I0717 18:39:45.565651   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 4m37.448033785s
	W0717 18:39:45.565675   80180 start.go:714] error starting host: provision: host is not running
	W0717 18:39:45.565775   80180 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 18:39:45.565784   80180 start.go:729] Will try again in 5 seconds ...
	I0717 18:39:45.587901   80401 main.go:141] libmachine: (no-preload-066175) Calling .Start
	I0717 18:39:45.588046   80401 main.go:141] libmachine: (no-preload-066175) Ensuring networks are active...
	I0717 18:39:45.588666   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network default is active
	I0717 18:39:45.589012   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network mk-no-preload-066175 is active
	I0717 18:39:45.589386   80401 main.go:141] libmachine: (no-preload-066175) Getting domain xml...
	I0717 18:39:45.589959   80401 main.go:141] libmachine: (no-preload-066175) Creating domain...
	I0717 18:39:46.785717   80401 main.go:141] libmachine: (no-preload-066175) Waiting to get IP...
	I0717 18:39:46.786495   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:46.786912   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:46.786974   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:46.786888   81612 retry.go:31] will retry after 301.458026ms: waiting for machine to come up
	I0717 18:39:47.090556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.091129   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.091154   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.091098   81612 retry.go:31] will retry after 347.107185ms: waiting for machine to come up
	I0717 18:39:47.439530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.440010   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.440033   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.439947   81612 retry.go:31] will retry after 436.981893ms: waiting for machine to come up
	I0717 18:39:47.878684   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.879091   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.879120   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.879051   81612 retry.go:31] will retry after 582.942833ms: waiting for machine to come up
	I0717 18:39:48.464068   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:48.464568   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:48.464593   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:48.464513   81612 retry.go:31] will retry after 633.101908ms: waiting for machine to come up
	I0717 18:39:49.099383   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.099762   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.099784   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.099720   81612 retry.go:31] will retry after 847.181679ms: waiting for machine to come up
	I0717 18:39:50.567294   80180 start.go:360] acquireMachinesLock for embed-certs-527415: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:39:49.948696   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.949228   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.949260   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.949188   81612 retry.go:31] will retry after 1.048891217s: waiting for machine to come up
	I0717 18:39:50.999658   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.000062   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.000099   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.000001   81612 retry.go:31] will retry after 942.285454ms: waiting for machine to come up
	I0717 18:39:51.944171   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.944676   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.944702   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.944632   81612 retry.go:31] will retry after 1.21768861s: waiting for machine to come up
	I0717 18:39:53.163883   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:53.164345   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:53.164368   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:53.164305   81612 retry.go:31] will retry after 1.505905193s: waiting for machine to come up
	I0717 18:39:54.671532   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:54.671951   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:54.671977   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:54.671918   81612 retry.go:31] will retry after 2.885547597s: waiting for machine to come up
	I0717 18:39:57.560375   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:57.560878   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:57.560902   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:57.560830   81612 retry.go:31] will retry after 3.53251124s: waiting for machine to come up
	I0717 18:40:02.249487   80857 start.go:364] duration metric: took 3m17.095542929s to acquireMachinesLock for "old-k8s-version-019549"
	I0717 18:40:02.249548   80857 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:02.249556   80857 fix.go:54] fixHost starting: 
	I0717 18:40:02.249946   80857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:02.249976   80857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:02.269365   80857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0717 18:40:02.269715   80857 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:02.270182   80857 main.go:141] libmachine: Using API Version  1
	I0717 18:40:02.270205   80857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:02.270534   80857 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:02.270738   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:02.270875   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetState
	I0717 18:40:02.272408   80857 fix.go:112] recreateIfNeeded on old-k8s-version-019549: state=Stopped err=<nil>
	I0717 18:40:02.272443   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	W0717 18:40:02.272597   80857 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:02.274702   80857 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-019549" ...
	I0717 18:40:01.094975   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has current primary IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095579   80401 main.go:141] libmachine: (no-preload-066175) Found IP for machine: 192.168.72.216
	I0717 18:40:01.095592   80401 main.go:141] libmachine: (no-preload-066175) Reserving static IP address...
	I0717 18:40:01.095955   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.095980   80401 main.go:141] libmachine: (no-preload-066175) DBG | skip adding static IP to network mk-no-preload-066175 - found existing host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"}
	I0717 18:40:01.095989   80401 main.go:141] libmachine: (no-preload-066175) Reserved static IP address: 192.168.72.216
	I0717 18:40:01.096000   80401 main.go:141] libmachine: (no-preload-066175) Waiting for SSH to be available...
	I0717 18:40:01.096010   80401 main.go:141] libmachine: (no-preload-066175) DBG | Getting to WaitForSSH function...
	I0717 18:40:01.098163   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098498   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.098521   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098631   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH client type: external
	I0717 18:40:01.098657   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa (-rw-------)
	I0717 18:40:01.098692   80401 main.go:141] libmachine: (no-preload-066175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:01.098707   80401 main.go:141] libmachine: (no-preload-066175) DBG | About to run SSH command:
	I0717 18:40:01.098720   80401 main.go:141] libmachine: (no-preload-066175) DBG | exit 0
	I0717 18:40:01.216740   80401 main.go:141] libmachine: (no-preload-066175) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:01.217099   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetConfigRaw
	I0717 18:40:01.217706   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.220160   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220461   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.220492   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220656   80401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/config.json ...
	I0717 18:40:01.220843   80401 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:01.220860   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:01.221067   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.223044   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223347   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.223371   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223531   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.223719   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223864   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223980   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.224125   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.224332   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.224345   80401 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:01.321053   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:01.321083   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321333   80401 buildroot.go:166] provisioning hostname "no-preload-066175"
	I0717 18:40:01.321359   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321529   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.323945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324269   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.324297   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324421   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.324582   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324724   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324837   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.324996   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.325162   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.325175   80401 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-066175 && echo "no-preload-066175" | sudo tee /etc/hostname
	I0717 18:40:01.435003   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-066175
	
	I0717 18:40:01.435033   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.437795   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438113   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.438155   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438344   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.438533   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438692   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.438948   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.439094   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.439108   80401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-066175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-066175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-066175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:01.540598   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:01.540631   80401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:01.540650   80401 buildroot.go:174] setting up certificates
	I0717 18:40:01.540660   80401 provision.go:84] configureAuth start
	I0717 18:40:01.540669   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.540977   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.543503   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543788   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.543817   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543907   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.545954   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546261   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.546280   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546415   80401 provision.go:143] copyHostCerts
	I0717 18:40:01.546483   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:01.546498   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:01.546596   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:01.546730   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:01.546743   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:01.546788   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:01.546878   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:01.546888   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:01.546921   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:01.547054   80401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.no-preload-066175 san=[127.0.0.1 192.168.72.216 localhost minikube no-preload-066175]
	I0717 18:40:01.628522   80401 provision.go:177] copyRemoteCerts
	I0717 18:40:01.628574   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:01.628596   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.631306   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631714   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.631761   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631876   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.632050   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.632210   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.632330   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:01.711344   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:01.738565   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 18:40:01.765888   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:40:01.790852   80401 provision.go:87] duration metric: took 250.181586ms to configureAuth
	I0717 18:40:01.790874   80401 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:01.791046   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:40:01.791111   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.793530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.793922   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.793945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.794095   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.794323   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794497   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794635   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.794786   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.794955   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.794969   80401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:02.032506   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:02.032543   80401 machine.go:97] duration metric: took 811.687511ms to provisionDockerMachine
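Note: the provisioning step above pushes a CRIO_MINIKUBE_OPTIONS drop-in to /etc/sysconfig/crio.minikube over SSH and restarts crio. As a rough illustration of what such a remote step looks like outside of minikube's own sshutil/ssh_runner plumbing, here is a minimal Go sketch using golang.org/x/crypto/ssh. The IP, port, user and key path are the ones reported in the log above; the variable names and the exact command string are illustrative only, not minikube's implementation.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address as reported by sshutil.go above; purely illustrative.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.216:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same shape as the provisioning command in the log: write a sysconfig
	// drop-in for CRI-O and restart the service.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := sess.CombinedOutput(cmd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}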
	I0717 18:40:02.032554   80401 start.go:293] postStartSetup for "no-preload-066175" (driver="kvm2")
	I0717 18:40:02.032567   80401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:02.032596   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.032921   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:02.032966   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.035429   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035731   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.035767   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035921   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.036081   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.036351   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.036493   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.114601   80401 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:02.118230   80401 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:02.118247   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:02.118308   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:02.118384   80401 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:02.118592   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:02.126753   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:02.148028   80401 start.go:296] duration metric: took 115.461293ms for postStartSetup
	I0717 18:40:02.148066   80401 fix.go:56] duration metric: took 16.582258787s for fixHost
	I0717 18:40:02.148084   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.150550   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.150917   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.150949   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.151061   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.151242   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151394   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151513   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.151658   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:02.151828   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:02.151841   80401 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:02.249303   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241602.223072082
	
	I0717 18:40:02.249334   80401 fix.go:216] guest clock: 1721241602.223072082
	I0717 18:40:02.249344   80401 fix.go:229] Guest: 2024-07-17 18:40:02.223072082 +0000 UTC Remote: 2024-07-17 18:40:02.14806999 +0000 UTC m=+268.060359078 (delta=75.002092ms)
	I0717 18:40:02.249388   80401 fix.go:200] guest clock delta is within tolerance: 75.002092ms
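Note: fix.go compares the guest clock (read via `date +%s.%N` over SSH) against the host clock and only re-synchronizes when the delta exceeds a tolerance; here the 75ms delta passes. A minimal sketch of that comparison, assuming the raw "seconds.nanoseconds" string has already been captured; the tolerance constant below is illustrative, not minikube's actual value.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1721241602.223072082" (output of `date +%s.%N`) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate to 9 digits so shorter fractional parts still parse as nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721241602.223072082") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold only
	if delta > tolerance {
		fmt.Printf("guest clock off by %v, would re-sync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}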
	I0717 18:40:02.249396   80401 start.go:83] releasing machines lock for "no-preload-066175", held for 16.683615057s
	I0717 18:40:02.249442   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.249735   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:02.252545   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.252896   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.252929   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.253053   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253516   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253770   80401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:02.253803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.253913   80401 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:02.253937   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.256152   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256462   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.256501   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256558   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.256616   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256718   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.256879   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257013   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.257021   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.257038   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.257158   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.257312   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.257469   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257604   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.376103   80401 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:02.381639   80401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:02.529357   80401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:02.536396   80401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:02.536463   80401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:02.555045   80401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:02.555067   80401 start.go:495] detecting cgroup driver to use...
	I0717 18:40:02.555130   80401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:02.570540   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:02.583804   80401 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:02.583867   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:02.596657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:02.610371   80401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:02.717489   80401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:02.875146   80401 docker.go:233] disabling docker service ...
	I0717 18:40:02.875235   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:02.895657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:02.908366   80401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:03.018375   80401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:03.143922   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:03.160599   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:03.180643   80401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 18:40:03.180709   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.190040   80401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:03.190097   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.199275   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.208647   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.217750   80401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:03.226808   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.235779   80401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.251451   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
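Note: the run above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with a series of sed one-liners (pause image, cgroupfs driver, conmon_cgroup, default_sysctls). A rough local equivalent of the first two substitutions in Go, operating directly on the file rather than through ssh_runner; the path and values are copied from the log, the code itself is only a sketch.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log above
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}

	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}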
	I0717 18:40:03.261476   80401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:03.269978   80401 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:03.270028   80401 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:03.280901   80401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
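Note the fallback above: `sysctl net.bridge.bridge-nf-call-iptables` exits with status 255 because the bridge netfilter module is not loaded yet, so the code treats that as non-fatal, runs `modprobe br_netfilter`, and then enables IP forwarding. A compact sketch of that check-then-load pattern using os/exec; the commands are the ones shown in the log, the surrounding program is illustrative.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Probe the bridge netfilter sysctl; on a fresh VM this can fail (status 255)
	// simply because br_netfilter is not loaded yet.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("sysctl probe failed (%v), loading br_netfilter", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			log.Fatal(err) // now it is a real problem
		}
	}
	// Enable IPv4 forwarding, as in the log above.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		log.Fatal(err)
	}
}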
	I0717 18:40:03.290184   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:03.409167   80401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:03.541153   80401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:03.541218   80401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:03.546012   80401 start.go:563] Will wait 60s for crictl version
	I0717 18:40:03.546059   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:03.549567   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:03.588396   80401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:03.588467   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.622472   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.652180   80401 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 18:40:03.653613   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:03.656560   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.656959   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:03.656987   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.657222   80401 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:03.661102   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
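Note: the hosts update above is the usual idempotent pattern: first grep for an existing host.minikube.internal entry, and only if it is missing rewrite /etc/hosts with any stale entry filtered out and the fresh one appended. A small Go sketch of the same filter-and-append idea against a local file; the hostname and IP come from the log, the rest is illustrative.

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.1\thost.minikube.internal" // values from the log above

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale host.minikube.internal line, keep everything else.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
		log.Fatal(err)
	}
}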
	I0717 18:40:03.673078   80401 kubeadm.go:883] updating cluster {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:03.673212   80401 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:40:03.673248   80401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:03.703959   80401 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 18:40:03.703986   80401 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:03.704042   80401 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.704078   80401 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.704095   80401 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.704114   80401 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.704150   80401 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.704077   80401 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.704168   80401 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 18:40:03.704243   80401 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.705795   80401 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705801   80401 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.705792   80401 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.705816   80401 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.705829   80401 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 18:40:03.706094   80401 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.925413   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.930827   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 18:40:03.963901   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.964215   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.966162   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.970852   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.973664   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.997849   80401 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 18:40:03.997912   80401 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.997969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118851   80401 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 18:40:04.118888   80401 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.118892   80401 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 18:40:04.118924   80401 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.118934   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118943   80401 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 18:40:04.118969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118969   80401 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.119001   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119027   80401 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 18:40:04.119058   80401 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.119089   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:04.119104   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119065   80401 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 18:40:04.119136   80401 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.119159   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:02.275985   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .Start
	I0717 18:40:02.276143   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring networks are active...
	I0717 18:40:02.276898   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network default is active
	I0717 18:40:02.277333   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network mk-old-k8s-version-019549 is active
	I0717 18:40:02.277796   80857 main.go:141] libmachine: (old-k8s-version-019549) Getting domain xml...
	I0717 18:40:02.278481   80857 main.go:141] libmachine: (old-k8s-version-019549) Creating domain...
	I0717 18:40:03.571325   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting to get IP...
	I0717 18:40:03.572359   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.572836   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.572968   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.572816   81751 retry.go:31] will retry after 301.991284ms: waiting for machine to come up
	I0717 18:40:03.876263   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.876688   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.876715   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.876637   81751 retry.go:31] will retry after 286.461163ms: waiting for machine to come up
	I0717 18:40:04.165366   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.165873   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.165902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.165811   81751 retry.go:31] will retry after 383.479108ms: waiting for machine to come up
	I0717 18:40:04.551152   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.551615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.551650   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.551589   81751 retry.go:31] will retry after 429.076714ms: waiting for machine to come up
	I0717 18:40:04.982157   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.982517   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.982545   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.982470   81751 retry.go:31] will retry after 553.684035ms: waiting for machine to come up
	I0717 18:40:04.122952   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.130590   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.130741   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.200609   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.200631   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.200643   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 18:40:04.200728   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:04.200741   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.200815   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.212034   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 18:40:04.212057   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.212113   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:04.212123   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.259447   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259525   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259548   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259552   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259553   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 18:40:04.259534   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.259588   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259591   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 18:40:04.259628   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259639   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.550060   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236639   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.976976668s)
	I0717 18:40:06.236683   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236691   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.97711629s)
	I0717 18:40:06.236718   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236732   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.977125153s)
	I0717 18:40:06.236752   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 18:40:06.236776   80401 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236854   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236781   80401 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.68669473s)
	I0717 18:40:06.236908   80401 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 18:40:06.236951   80401 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236994   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:08.107122   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870244887s)
	I0717 18:40:08.107152   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 18:40:08.107175   80401 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107203   80401 ssh_runner.go:235] Completed: which crictl: (1.870188554s)
	I0717 18:40:08.107224   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107261   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:08.146817   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 18:40:08.146932   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
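Note: the repeated `stat -c "%s %y"` calls above are how the copier decides whether an image tarball already on the VM matches the local cache; when they match, the transfer is skipped ("copy: skipping ... (exists)"). A sketch of that decision, comparing only sizes for brevity (the real check also looks at modification time); runRemoteStat is a hypothetical stand-in for the remote stat call, not a minikube function.

package main

import (
	"fmt"
	"os"
)

// runRemoteStat is a stand-in for `ssh ... stat -c "%s %y" <path>`; hypothetical helper.
func runRemoteStat(path string) (string, error) {
	return "", fmt.Errorf("not wired up in this sketch")
}

// needsCopy reports whether the local file should be pushed again, by comparing
// its size against the first field of the remote stat output.
func needsCopy(local, remote string) (bool, error) {
	fi, err := os.Stat(local)
	if err != nil {
		return false, err
	}
	out, err := runRemoteStat(remote)
	if err != nil {
		return true, nil // no remote file (or stat failed): copy it
	}
	var size int64
	var rest string
	if _, err := fmt.Sscanf(out, "%d %s", &size, &rest); err != nil {
		return true, nil
	}
	return size != fi.Size(), nil
}

func main() {
	copyIt, err := needsCopy("/var/cache/images/etcd_3.5.14-0", "/var/lib/minikube/images/etcd_3.5.14-0")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("needs copy:", copyIt)
}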
	I0717 18:40:05.538229   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:05.538753   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:05.538777   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:05.538702   81751 retry.go:31] will retry after 747.130907ms: waiting for machine to come up
	I0717 18:40:06.287146   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:06.287626   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:06.287665   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:06.287581   81751 retry.go:31] will retry after 1.171580264s: waiting for machine to come up
	I0717 18:40:07.461393   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:07.462015   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:07.462046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:07.461963   81751 retry.go:31] will retry after 1.199265198s: waiting for machine to come up
	I0717 18:40:08.663340   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:08.663789   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:08.663815   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:08.663745   81751 retry.go:31] will retry after 1.621895351s: waiting for machine to come up
	I0717 18:40:11.404193   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.296944718s)
	I0717 18:40:11.404228   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 18:40:11.404248   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:11.404245   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.257289666s)
	I0717 18:40:11.404272   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 18:40:11.404294   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:13.370389   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966067238s)
	I0717 18:40:13.370426   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 18:40:13.370455   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:13.370505   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:10.287596   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:10.288019   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:10.288046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:10.287964   81751 retry.go:31] will retry after 1.748504204s: waiting for machine to come up
	I0717 18:40:12.038137   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:12.038582   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:12.038615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:12.038532   81751 retry.go:31] will retry after 2.477996004s: waiting for machine to come up
	I0717 18:40:14.517788   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:14.518175   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:14.518203   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:14.518123   81751 retry.go:31] will retry after 3.29313184s: waiting for machine to come up
	I0717 18:40:19.093608   81068 start.go:364] duration metric: took 3m4.523289209s to acquireMachinesLock for "default-k8s-diff-port-022930"
	I0717 18:40:19.093694   81068 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:19.093705   81068 fix.go:54] fixHost starting: 
	I0717 18:40:19.094122   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:19.094157   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:19.113793   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0717 18:40:19.114236   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:19.114755   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:40:19.114775   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:19.115110   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:19.115294   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:19.115434   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:40:19.117072   81068 fix.go:112] recreateIfNeeded on default-k8s-diff-port-022930: state=Stopped err=<nil>
	I0717 18:40:19.117109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	W0717 18:40:19.117256   81068 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:19.120986   81068 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-022930" ...
	I0717 18:40:15.214734   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.844202729s)
	I0717 18:40:15.214756   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 18:40:15.214777   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:15.214814   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:17.066570   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.851726063s)
	I0717 18:40:17.066604   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 18:40:17.066629   80401 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.066679   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.703556   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 18:40:17.703614   80401 cache_images.go:123] Successfully loaded all cached images
	I0717 18:40:17.703624   80401 cache_images.go:92] duration metric: took 13.999623105s to LoadCachedImages
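Note: because this is the no-preload profile, no preloaded image tarball exists, so each cached image is pushed to the VM and loaded with `sudo podman load -i ...`, one at a time, which is why the loads above run sequentially and account for most of the ~14s total. A minimal local sketch of the load step with os/exec; the tarball paths match the log, the loop and error handling are illustrative.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Tarball names as they appear under /var/lib/minikube/images in the log above.
	images := []string{
		"/var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0",
		"/var/lib/minikube/images/coredns_v1.11.1",
		"/var/lib/minikube/images/etcd_3.5.14-0",
		"/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0",
		"/var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0",
		"/var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}
	for _, tar := range images {
		cmd := exec.Command("sudo", "podman", "load", "-i", tar)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("podman load %s failed: %v\n%s", tar, err, out)
		}
	}
}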
	I0717 18:40:17.703638   80401 kubeadm.go:934] updating node { 192.168.72.216 8443 v1.31.0-beta.0 crio true true} ...
	I0717 18:40:17.703754   80401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-066175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:17.703830   80401 ssh_runner.go:195] Run: crio config
	I0717 18:40:17.753110   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:17.753138   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:17.753159   80401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:17.753190   80401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.216 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-066175 NodeName:no-preload-066175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:17.753404   80401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-066175"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:17.753492   80401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 18:40:17.763417   80401 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:17.763491   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:17.772139   80401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 18:40:17.786982   80401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 18:40:17.801327   80401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 18:40:17.816796   80401 ssh_runner.go:195] Run: grep 192.168.72.216	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:17.820354   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:17.834155   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:17.970222   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:17.989953   80401 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175 for IP: 192.168.72.216
	I0717 18:40:17.989977   80401 certs.go:194] generating shared ca certs ...
	I0717 18:40:17.989998   80401 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:17.990160   80401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:17.990217   80401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:17.990231   80401 certs.go:256] generating profile certs ...
	I0717 18:40:17.990365   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key
	I0717 18:40:17.990460   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672
	I0717 18:40:17.990509   80401 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key
	I0717 18:40:17.990679   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:17.990723   80401 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:17.990740   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:17.990772   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:17.990813   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:17.990846   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:17.990905   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:17.991590   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:18.035349   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:18.079539   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:18.110382   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:18.135920   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:40:18.168675   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:18.196132   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:18.230418   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:18.254319   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:18.277293   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:18.301416   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:18.330021   80401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:18.348803   80401 ssh_runner.go:195] Run: openssl version
	I0717 18:40:18.355126   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:18.366004   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370221   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370287   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.375799   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:18.385991   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:18.396141   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400451   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400526   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.406203   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:18.419059   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:18.429450   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433742   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433794   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.439261   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:18.450327   80401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:18.454734   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:18.460256   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:18.465766   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:18.471349   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:18.476780   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:18.482509   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
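The sequence of openssl x509 -noout -checkend 86400 runs above is the pre-restart check that each control-plane certificate stays valid for at least another 24 hours. A minimal Go sketch of an equivalent check, using the apiserver certificate path copied earlier in this log (an illustrative standalone program, not minikube's own code):

// certcheck.go: a minimal sketch of a 24h expiry check for a PEM certificate,
// mirroring `openssl x509 -noout -checkend 86400`. The path is taken from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Fail if the certificate expires within the next 24 hours.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}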
	I0717 18:40:18.488138   80401 kubeadm.go:392] StartCluster: {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:18.488229   80401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:18.488270   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.532219   80401 cri.go:89] found id: ""
	I0717 18:40:18.532318   80401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:18.542632   80401 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:18.542655   80401 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:18.542699   80401 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:18.552352   80401 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:18.553351   80401 kubeconfig.go:125] found "no-preload-066175" server: "https://192.168.72.216:8443"
	I0717 18:40:18.555295   80401 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:18.565857   80401 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.216
	I0717 18:40:18.565892   80401 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:18.565905   80401 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:18.565958   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.605512   80401 cri.go:89] found id: ""
	I0717 18:40:18.605593   80401 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:18.622235   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:18.633175   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:18.633196   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:18.633241   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:18.641969   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:18.642023   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:18.651017   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:18.659619   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:18.659667   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:18.668008   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.675985   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:18.676037   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.685937   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:18.695574   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:18.695624   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
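The grep-then-rm sequence above is the stale kubeconfig cleanup: any of the four /etc/kubernetes/*.conf files that does not point at https://control-plane.minikube.internal:8443 is removed so the kubeadm init phases that follow can regenerate it. A minimal Go sketch of that check, assuming the same paths (illustrative only):

// stale-kubeconfig.go: a minimal sketch of the stale-config cleanup shown above.
// Any kubeconfig missing the expected control-plane server entry is removed.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const wantServer = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), wantServer) {
			// Missing file or wrong server entry: remove it (ignoring errors),
			// mirroring the `sudo rm -f <file>` calls in the log.
			_ = os.Remove(f)
			fmt.Printf("removed (or absent): %s\n", f)
			continue
		}
		fmt.Printf("kept: %s\n", f)
	}
}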
	I0717 18:40:18.706040   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:18.717397   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:18.836009   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:19.122366   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Start
	I0717 18:40:19.122530   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring networks are active...
	I0717 18:40:19.123330   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network default is active
	I0717 18:40:19.123832   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network mk-default-k8s-diff-port-022930 is active
	I0717 18:40:19.124268   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Getting domain xml...
	I0717 18:40:19.124922   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Creating domain...
	I0717 18:40:17.813673   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814213   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has current primary IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814242   80857 main.go:141] libmachine: (old-k8s-version-019549) Found IP for machine: 192.168.39.128
	I0717 18:40:17.814277   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserving static IP address...
	I0717 18:40:17.814720   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserved static IP address: 192.168.39.128
	I0717 18:40:17.814738   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting for SSH to be available...
	I0717 18:40:17.814762   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.814783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | skip adding static IP to network mk-old-k8s-version-019549 - found existing host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"}
	I0717 18:40:17.814796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Getting to WaitForSSH function...
	I0717 18:40:17.817314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817714   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.817743   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH client type: external
	I0717 18:40:17.817944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa (-rw-------)
	I0717 18:40:17.817971   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:17.817984   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | About to run SSH command:
	I0717 18:40:17.818000   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | exit 0
	I0717 18:40:17.945902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:17.946262   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetConfigRaw
	I0717 18:40:17.946907   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:17.949757   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950158   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.950178   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950474   80857 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/config.json ...
	I0717 18:40:17.950706   80857 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:17.950728   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:17.950941   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:17.953738   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954141   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.954184   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954282   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:17.954456   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954617   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954790   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:17.954957   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:17.955121   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:17.955131   80857 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:18.061082   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:18.061113   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061405   80857 buildroot.go:166] provisioning hostname "old-k8s-version-019549"
	I0717 18:40:18.061432   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061685   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.064855   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.065348   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065537   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.065777   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.065929   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.066118   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.066329   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.066547   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.066564   80857 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-019549 && echo "old-k8s-version-019549" | sudo tee /etc/hostname
	I0717 18:40:18.191467   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-019549
	
	I0717 18:40:18.191517   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.194917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195455   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.195502   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195714   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.195908   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196105   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196288   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.196483   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.196708   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.196731   80857 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-019549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-019549/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-019549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:18.315020   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:18.315047   80857 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:18.315065   80857 buildroot.go:174] setting up certificates
	I0717 18:40:18.315078   80857 provision.go:84] configureAuth start
	I0717 18:40:18.315090   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.315358   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:18.318342   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.318796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.318826   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.319078   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.321562   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.321914   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.321944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.322125   80857 provision.go:143] copyHostCerts
	I0717 18:40:18.322208   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:18.322226   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:18.322309   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:18.322443   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:18.322457   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:18.322492   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:18.322579   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:18.322591   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:18.322621   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:18.322727   80857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-019549 san=[127.0.0.1 192.168.39.128 localhost minikube old-k8s-version-019549]
	I0717 18:40:18.397216   80857 provision.go:177] copyRemoteCerts
	I0717 18:40:18.397266   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:18.397301   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.399887   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400237   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.400286   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400531   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.400732   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.400880   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.401017   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.490677   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:18.518392   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 18:40:18.543930   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:18.567339   80857 provision.go:87] duration metric: took 252.250106ms to configureAuth
	I0717 18:40:18.567360   80857 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:18.567539   80857 config.go:182] Loaded profile config "old-k8s-version-019549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:40:18.567610   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.570373   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.570809   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570943   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.571140   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571281   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.571624   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.571841   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.571862   80857 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:18.845725   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:18.845752   80857 machine.go:97] duration metric: took 895.03234ms to provisionDockerMachine
	I0717 18:40:18.845765   80857 start.go:293] postStartSetup for "old-k8s-version-019549" (driver="kvm2")
	I0717 18:40:18.845778   80857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:18.845828   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:18.846158   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:18.846192   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.848760   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849264   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.849293   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.849649   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.849843   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.850007   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.938026   80857 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:18.943223   80857 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:18.943254   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:18.943317   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:18.943417   80857 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:18.943509   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:18.954887   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:18.976980   80857 start.go:296] duration metric: took 131.200877ms for postStartSetup
	I0717 18:40:18.977022   80857 fix.go:56] duration metric: took 16.727466541s for fixHost
	I0717 18:40:18.977041   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.980020   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980384   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.980417   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980533   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.980723   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.980903   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.981059   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.981207   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.981406   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.981418   80857 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:19.093409   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241619.063415252
	
	I0717 18:40:19.093433   80857 fix.go:216] guest clock: 1721241619.063415252
	I0717 18:40:19.093443   80857 fix.go:229] Guest: 2024-07-17 18:40:19.063415252 +0000 UTC Remote: 2024-07-17 18:40:18.97702579 +0000 UTC m=+213.960604949 (delta=86.389462ms)
	I0717 18:40:19.093494   80857 fix.go:200] guest clock delta is within tolerance: 86.389462ms
	I0717 18:40:19.093506   80857 start.go:83] releasing machines lock for "old-k8s-version-019549", held for 16.843984035s
	I0717 18:40:19.093543   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.093842   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:19.096443   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.096817   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.096848   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.097035   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097579   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097769   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097859   80857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:19.097915   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.098007   80857 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:19.098031   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.100775   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101108   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101160   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101185   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101412   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101595   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.101606   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101637   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101718   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.101789   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101853   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.101975   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.102092   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.102212   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.218596   80857 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:19.225675   80857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:19.371453   80857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:19.381365   80857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:19.381438   80857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:19.397504   80857 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:19.397530   80857 start.go:495] detecting cgroup driver to use...
	I0717 18:40:19.397597   80857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:19.412150   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:19.425495   80857 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:19.425578   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:19.438662   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:19.451953   80857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:19.578702   80857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:19.733328   80857 docker.go:233] disabling docker service ...
	I0717 18:40:19.733411   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:19.753615   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:19.774057   80857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:19.933901   80857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:20.049914   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:20.063500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:20.082560   80857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 18:40:20.082611   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.092857   80857 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:20.092912   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.103283   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.112612   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.122671   80857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:20.132892   80857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:20.145445   80857 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:20.145501   80857 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:20.158958   80857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:20.168377   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:20.307224   80857 ssh_runner.go:195] Run: sudo systemctl restart crio
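The sed edits above pin the pause image to registry.k8s.io/pause:3.2 and force the cgroupfs cgroup manager in CRI-O's 02-crio.conf drop-in before the daemon is restarted. A minimal Go sketch of the same in-place rewrite (illustrative only; the run performs it with sed over SSH):

// crio-conf.go: a minimal sketch of the drop-in rewrite the sed commands above perform.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf := string(data)
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("rewrote", path, "- restart crio for the change to take effect")
}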
	I0717 18:40:20.453407   80857 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:20.453490   80857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:20.458007   80857 start.go:563] Will wait 60s for crictl version
	I0717 18:40:20.458062   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:20.461420   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:20.507358   80857 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:20.507426   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.542812   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.577280   80857 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 18:40:20.432028   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.59597321s)
	I0717 18:40:20.432063   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.633854   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.728474   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.879989   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:20.880079   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.380421   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.880208   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.912390   80401 api_server.go:72] duration metric: took 1.032400417s to wait for apiserver process to appear ...
	I0717 18:40:21.912419   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:21.912443   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:21.912904   80401 api_server.go:269] stopped: https://192.168.72.216:8443/healthz: Get "https://192.168.72.216:8443/healthz": dial tcp 192.168.72.216:8443: connect: connection refused
	I0717 18:40:22.412598   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
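The healthz probes that begin above follow a typical apiserver startup: the first request is refused while the listener comes up, later ones return 403 for the anonymous user and 500 while post-start hooks are still pending (as shown further below), until the endpoint reports healthy or the wait times out. A minimal Go sketch of such a polling loop, assuming the endpoint from the log and skipping TLS verification because the probe runs before the cluster CA is wired into the client:

// healthz-wait.go: a minimal sketch of an apiserver /healthz polling loop.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.216:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Connection refused while the apiserver is still starting: retry.
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s returned %d\n", url, resp.StatusCode)
		if resp.StatusCode == http.StatusOK {
			fmt.Println(string(body)) // typically just "ok"
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}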
	I0717 18:40:20.397025   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting to get IP...
	I0717 18:40:20.398122   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398525   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398610   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.398506   81910 retry.go:31] will retry after 285.646022ms: waiting for machine to come up
	I0717 18:40:20.686556   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687151   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687263   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.687202   81910 retry.go:31] will retry after 239.996ms: waiting for machine to come up
	I0717 18:40:20.928604   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929111   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929139   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.929057   81910 retry.go:31] will retry after 487.674422ms: waiting for machine to come up
	I0717 18:40:21.418475   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418928   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.418872   81910 retry.go:31] will retry after 439.363216ms: waiting for machine to come up
	I0717 18:40:21.859546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860273   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.860145   81910 retry.go:31] will retry after 598.922134ms: waiting for machine to come up
	I0717 18:40:22.461026   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461509   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461542   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:22.461457   81910 retry.go:31] will retry after 908.602286ms: waiting for machine to come up
	I0717 18:40:23.371582   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372170   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:23.372093   81910 retry.go:31] will retry after 893.690966ms: waiting for machine to come up
	I0717 18:40:24.267377   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267908   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267935   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:24.267873   81910 retry.go:31] will retry after 1.468061022s: waiting for machine to come up
	I0717 18:40:20.578679   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:20.581569   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.581933   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:20.581961   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.582197   80857 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:20.586047   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
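The grep and the bash one-liner above make the /etc/hosts update idempotent: any existing host.minikube.internal line is dropped and a fresh mapping to the 192.168.39.1 gateway is appended. A minimal Go sketch of the same edit (illustrative only):

// hosts-update.go: a minimal sketch of the /etc/hosts rewrite shown above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale entry, mirroring `grep -v $'\thost.minikube.internal$'`.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	// Re-append the desired mapping and write the file back.
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("updated /etc/hosts with", entry)
}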
	I0717 18:40:20.598137   80857 kubeadm.go:883] updating cluster {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:20.598284   80857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:40:20.598355   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:20.646681   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:20.646757   80857 ssh_runner.go:195] Run: which lz4
	I0717 18:40:20.650691   80857 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:20.654703   80857 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:20.654730   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 18:40:22.163706   80857 crio.go:462] duration metric: took 1.513040695s to copy over tarball
	I0717 18:40:22.163783   80857 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:40:24.904256   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.904292   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.904308   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:24.971088   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.971120   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.971136   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.015832   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.015868   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.413309   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.418927   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.418955   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.913026   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.917375   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.917407   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.412566   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.419115   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.419140   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.912680   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.920245   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.920268   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.412854   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.417356   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.417390   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.912883   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.918242   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.918274   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:28.412591   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:28.419257   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:40:28.427814   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:40:28.427842   80401 api_server.go:131] duration metric: took 6.515416451s to wait for apiserver health ...
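For orientation, the 403/500 sequence above is the apiserver coming up: anonymous probes are rejected with 403 until the RBAC bootstrap roles exist, then the remaining [-] post-start hooks clear one by one until /healthz finally returns 200. Below is a minimal Go sketch of such a polling loop — illustrative only, not minikube's actual api_server.go code; the URL comes from the log, while the 2-minute budget and 500ms interval are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the timeout elapses. TLS verification is skipped because the probe runs
// anonymously, before the client trusts the cluster CA -- which is also why
// the early responses in the log are 403 "system:anonymous" rejections.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the final "returned 200: ok" case
			}
			// 403 until RBAC is bootstrapped, then 500 while post-start
			// hooks (the [-] lines) are still failing.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.216:8443/healthz", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}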
	I0717 18:40:28.427854   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:28.427863   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:28.429828   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:40:28.431012   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:28.444822   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:40:28.465212   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:28.477639   80401 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:28.477691   80401 system_pods.go:61] "coredns-5cfdc65f69-spj2w" [6849b651-9346-4d96-97a7-88eca7bbd50a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:28.477706   80401 system_pods.go:61] "etcd-no-preload-066175" [be012488-220b-421d-bf16-a3623fafb8fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:28.477721   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [4292a786-61f3-405d-8784-ec8a58e1b124] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:28.477731   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [937a48f4-7fca-4cee-bb50-51f1720960da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:28.477739   80401 system_pods.go:61] "kube-proxy-tn5xn" [f0a910b3-98b6-470f-a5a2-e49369ecb733] Running
	I0717 18:40:28.477748   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [ffa2475c-7a5a-4988-89a2-4727e07356cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:28.477756   80401 system_pods.go:61] "metrics-server-78fcd8795b-mbtvd" [ccd7a565-52ef-49be-b659-31ae20af537a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:28.477761   80401 system_pods.go:61] "storage-provisioner" [19914ecc-2fcc-4cb8-bd78-fb6891dcf85d] Running
	I0717 18:40:28.477769   80401 system_pods.go:74] duration metric: took 12.536267ms to wait for pod list to return data ...
	I0717 18:40:28.477777   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:28.482322   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:28.482348   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:28.482368   80401 node_conditions.go:105] duration metric: took 4.585233ms to run NodePressure ...
	I0717 18:40:28.482387   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.768656   80401 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773308   80401 kubeadm.go:739] kubelet initialised
	I0717 18:40:28.773330   80401 kubeadm.go:740] duration metric: took 4.654448ms waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773338   80401 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:28.778778   80401 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
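The pod_ready.go wait that starts here repeatedly fetches each system-critical pod and checks its Ready condition until it passes or the 4m0s budget runs out. A rough client-go sketch of that check follows — an illustration under assumptions (KUBECONFIG from the environment, pod name copied from the log), not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// condition the wait above keeps re-checking for each system-critical pod.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path, namespace, and pod name are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5cfdc65f69-spj2w", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}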
	I0717 18:40:25.738071   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738580   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738611   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:25.738538   81910 retry.go:31] will retry after 1.505740804s: waiting for machine to come up
	I0717 18:40:27.246293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246651   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246674   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:27.246606   81910 retry.go:31] will retry after 1.574253799s: waiting for machine to come up
	I0717 18:40:28.822159   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822597   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:28.822517   81910 retry.go:31] will retry after 2.132842884s: waiting for machine to come up
	I0717 18:40:25.307875   80857 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.144060111s)
	I0717 18:40:25.307903   80857 crio.go:469] duration metric: took 3.144169984s to extract the tarball
	I0717 18:40:25.307914   80857 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:25.354436   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:25.404799   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:25.404827   80857 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:25.404884   80857 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.404936   80857 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 18:40:25.404908   80857 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.404952   80857 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.404998   80857 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.405010   80857 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.406661   80857 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.406667   80857 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 18:40:25.406690   80857 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.407119   80857 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.619950   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 18:40:25.635075   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.641561   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.647362   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.648054   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.649684   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.664183   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.709163   80857 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 18:40:25.709227   80857 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 18:40:25.709275   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.760931   80857 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 18:40:25.760994   80857 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.761042   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.779324   80857 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 18:40:25.779378   80857 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.779429   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799052   80857 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 18:40:25.799097   80857 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.799106   80857 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 18:40:25.799131   80857 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 18:40:25.799190   80857 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.799233   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799136   80857 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.799148   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799298   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.806973   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 18:40:25.807041   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.807066   80857 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 18:40:25.807095   80857 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.807126   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.807137   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.807237   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.811025   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.811114   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.935792   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 18:40:25.935853   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 18:40:25.935863   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 18:40:25.935934   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.935973   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 18:40:25.935996   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 18:40:25.940351   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 18:40:25.970107   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 18:40:26.231894   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:26.372230   80857 cache_images.go:92] duration metric: took 967.383323ms to LoadCachedImages
	W0717 18:40:26.372327   80857 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
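The "needs transfer" lines above come from comparing the image ID the runtime reports against the ID expected for each cached image; a mismatch or a missing image means the image must be re-loaded from the local cache, which fails here because the cache directory has no coredns_1.7.0 tarball. Below is a hedged sketch of that existence/ID check, shelling out the same way the log does; the expected ID in main is the pause:3.2 hash quoted above, and the helper itself is illustrative rather than minikube's cache_images.go logic.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeImageID returns the image ID the container runtime reports for ref,
// or an empty string if the image is not present. This mirrors the
// `sudo podman image inspect --format {{.Id}}` calls in the log above.
func runtimeImageID(ref string) string {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", ref).Output()
	if err != nil {
		return "" // image not present in the runtime
	}
	return strings.TrimSpace(string(out))
}

// needsTransfer reports whether ref must be (re)loaded from the local cache,
// given the image ID the cached tarball is expected to contain.
func needsTransfer(ref, expectedID string) bool {
	return runtimeImageID(ref) != expectedID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
}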
	I0717 18:40:26.372346   80857 kubeadm.go:934] updating node { 192.168.39.128 8443 v1.20.0 crio true true} ...
	I0717 18:40:26.372517   80857 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-019549 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:26.372613   80857 ssh_runner.go:195] Run: crio config
	I0717 18:40:26.416155   80857 cni.go:84] Creating CNI manager for ""
	I0717 18:40:26.416181   80857 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:26.416196   80857 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:26.416229   80857 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-019549 NodeName:old-k8s-version-019549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 18:40:26.416526   80857 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-019549"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:26.416595   80857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 18:40:26.426941   80857 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:26.427006   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:26.437810   80857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 18:40:26.460046   80857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:26.482521   80857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 18:40:26.502536   80857 ssh_runner.go:195] Run: grep 192.168.39.128	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:26.506513   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
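The bash one-liner above rewrites /etc/hosts in one shot: it drops any stale line ending in the pinned hostname, appends the current mapping, writes the result to a temp file, and copies it back with sudo. A hedged Go sketch of the same idea follows — the IP and hostname are taken from the log, but the helper itself is illustrative and would need root to touch the real /etc/hosts.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line maps
// host to ip, preserving every entry for other names -- the same effect as
// the grep/echo/cp one-liner in the log above.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.128", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}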
	I0717 18:40:26.520895   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:26.648931   80857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:26.665278   80857 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549 for IP: 192.168.39.128
	I0717 18:40:26.665300   80857 certs.go:194] generating shared ca certs ...
	I0717 18:40:26.665329   80857 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:26.665508   80857 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:26.665561   80857 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:26.665574   80857 certs.go:256] generating profile certs ...
	I0717 18:40:26.665693   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.key
	I0717 18:40:26.665780   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key.9c9b0a7e
	I0717 18:40:26.665836   80857 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key
	I0717 18:40:26.665998   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:26.666049   80857 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:26.666063   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:26.666095   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:26.666128   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:26.666167   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:26.666225   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:26.667047   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:26.713984   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:26.742617   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:26.770441   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:26.795098   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 18:40:26.825038   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:26.861300   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:26.901664   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:40:26.926357   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:26.948986   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:26.973248   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:26.994642   80857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:27.010158   80857 ssh_runner.go:195] Run: openssl version
	I0717 18:40:27.015861   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:27.026221   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030496   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030567   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.035862   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:27.046312   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:27.057117   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061775   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061824   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.067535   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:27.079022   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:27.090009   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094688   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094768   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.100404   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
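The openssl x509 -hash / ln -fs pairs above install each CA certificate into OpenSSL's hashed certificate directory: the certificate is linked as /etc/ssl/certs/<subject-hash>.0 so TLS clients can locate it by subject. A hedged Go sketch of the same operation, shelling out to openssl for the hash as the log does (paths are illustrative and writing to /etc/ssl/certs requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links a CA certificate into an OpenSSL-style hashed cert
// directory (e.g. /etc/ssl/certs/<subject-hash>.0), matching the
// `openssl x509 -hash` + `ln -fs` pair in the log above.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}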
	I0717 18:40:27.110653   80857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:27.115117   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:27.120633   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:27.126070   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:27.131500   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:27.137035   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:27.142426   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
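Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A small Go equivalent of that check using crypto/x509 — the path in main is illustrative; the log checks several certificates under /var/lib/minikube/certs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window -- the same question `openssl x509 -checkend 86400`
// answers for a 24h window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}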
	I0717 18:40:27.147638   80857 kubeadm.go:392] StartCluster: {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:27.147756   80857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:27.147816   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.187433   80857 cri.go:89] found id: ""
	I0717 18:40:27.187498   80857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:27.197001   80857 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:27.197020   80857 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:27.197070   80857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:27.206758   80857 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:27.207822   80857 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:40:27.208505   80857 kubeconfig.go:62] /home/jenkins/minikube-integration/19283-14386/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-019549" cluster setting kubeconfig missing "old-k8s-version-019549" context setting]
	I0717 18:40:27.209497   80857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:27.212786   80857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:27.222612   80857 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.128
	I0717 18:40:27.222649   80857 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:27.222663   80857 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:27.222721   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.268127   80857 cri.go:89] found id: ""
	I0717 18:40:27.268205   80857 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:27.284334   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:27.293669   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:27.293691   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:27.293743   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:27.305348   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:27.305437   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:27.317749   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:27.328481   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:27.328547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:27.337574   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.346242   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:27.346299   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.354946   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:27.363296   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:27.363350   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
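Since none of the /etc/kubernetes/*.conf files exist on this freshly restarted VM, every grep for the control-plane endpoint above fails and the follow-up rm -f is effectively a no-op. The pattern applied to each file, paraphrased as a sketch:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"        # drop configs that do not point at the expected endpoint
    done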
	I0717 18:40:27.371925   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:27.384020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:27.571539   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.767574   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.19599736s)
	I0717 18:40:28.767612   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.011512   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.151980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
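Rather than a full `kubeadm init`, the restart path above regenerates the control plane piecewise: the certs, kubeconfig, kubelet-start, control-plane and etcd phases are run in order against the rendered /var/tmp/minikube/kubeadm.yaml. The same sequence, collapsed into a sketch:

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.20.0
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"   # $phase left unquoted on purpose
    done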
	I0717 18:40:29.258796   80857 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:29.258886   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:29.759072   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.787614   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:33.285208   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:30.956634   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957140   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:30.957059   81910 retry.go:31] will retry after 3.31337478s: waiting for machine to come up
	I0717 18:40:34.272528   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273063   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273094   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:34.273032   81910 retry.go:31] will retry after 3.207729964s: waiting for machine to come up
	I0717 18:40:30.259921   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.758948   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.258967   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.759872   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.259187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.759299   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.259080   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.759583   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.259740   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.759068   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
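The repeated `pgrep -xnf kube-apiserver.*minikube.*` lines are a roughly 500 ms polling loop waiting for the apiserver process to appear once kubelet launches the static pods. An equivalent standalone loop (the 120 s timeout here is an assumption for illustration; minikube applies its own deadline):

    timeout 120 bash -c 'until sudo pgrep -xnf "kube-apiserver.*minikube.*" >/dev/null; do sleep 0.5; done'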
	I0717 18:40:38.697183   80180 start.go:364] duration metric: took 48.129837953s to acquireMachinesLock for "embed-certs-527415"
	I0717 18:40:38.697248   80180 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:38.697260   80180 fix.go:54] fixHost starting: 
	I0717 18:40:38.697680   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:38.697712   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:38.713575   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0717 18:40:38.713926   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:38.714396   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:40:38.714422   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:38.714762   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:38.714949   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:38.715109   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:40:38.716552   80180 fix.go:112] recreateIfNeeded on embed-certs-527415: state=Stopped err=<nil>
	I0717 18:40:38.716574   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	W0717 18:40:38.716775   80180 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:38.718610   80180 out.go:177] * Restarting existing kvm2 VM for "embed-certs-527415" ...
	I0717 18:40:35.285888   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:36.285651   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.285676   80401 pod_ready.go:81] duration metric: took 7.506876819s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.285686   80401 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292615   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.292638   80401 pod_ready.go:81] duration metric: took 6.944487ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292650   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:38.298338   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:37.484312   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484723   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has current primary IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484740   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Found IP for machine: 192.168.50.245
	I0717 18:40:37.484753   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserving static IP address...
	I0717 18:40:37.485137   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.485161   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserved static IP address: 192.168.50.245
	I0717 18:40:37.485174   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | skip adding static IP to network mk-default-k8s-diff-port-022930 - found existing host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"}
	I0717 18:40:37.485191   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Getting to WaitForSSH function...
	I0717 18:40:37.485207   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for SSH to be available...
	I0717 18:40:37.487397   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487767   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.487796   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487899   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH client type: external
	I0717 18:40:37.487927   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa (-rw-------)
	I0717 18:40:37.487961   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:37.487973   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | About to run SSH command:
	I0717 18:40:37.487992   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | exit 0
	I0717 18:40:37.608746   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:37.609085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetConfigRaw
	I0717 18:40:37.609739   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.612293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612668   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.612689   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612936   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:40:37.613176   81068 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:37.613194   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:37.613391   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.615483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615774   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.615804   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615881   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.616038   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616187   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616306   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.616470   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.616676   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.616691   81068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:37.720971   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:37.721004   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721307   81068 buildroot.go:166] provisioning hostname "default-k8s-diff-port-022930"
	I0717 18:40:37.721340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.724162   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724507   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.724535   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724712   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.724912   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725090   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725259   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.725430   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.725635   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.725651   81068 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-022930 && echo "default-k8s-diff-port-022930" | sudo tee /etc/hostname
	I0717 18:40:37.837366   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-022930
	
	I0717 18:40:37.837389   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.839920   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840291   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.840325   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.840654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840830   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840970   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.841130   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.841344   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.841363   81068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-022930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-022930/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-022930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:37.948311   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:37.948343   81068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:37.948394   81068 buildroot.go:174] setting up certificates
	I0717 18:40:37.948406   81068 provision.go:84] configureAuth start
	I0717 18:40:37.948416   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.948732   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.951214   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951548   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.951578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951693   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.953805   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954086   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.954105   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954250   81068 provision.go:143] copyHostCerts
	I0717 18:40:37.954318   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:37.954334   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:37.954401   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:37.954531   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:37.954542   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:37.954575   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:37.954657   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:37.954667   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:37.954694   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:37.954758   81068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-022930 san=[127.0.0.1 192.168.50.245 default-k8s-diff-port-022930 localhost minikube]
	I0717 18:40:38.054084   81068 provision.go:177] copyRemoteCerts
	I0717 18:40:38.054136   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:38.054160   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.056841   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057265   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.057300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.057683   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.057839   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.057982   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.138206   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:38.163105   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 18:40:38.188449   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:38.214829   81068 provision.go:87] duration metric: took 266.409028ms to configureAuth
	I0717 18:40:38.214853   81068 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:38.215005   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:38.215068   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.217684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218010   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.218037   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.218419   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218573   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218706   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.218874   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.219021   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.219039   81068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:38.471162   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:38.471191   81068 machine.go:97] duration metric: took 858.000457ms to provisionDockerMachine
	I0717 18:40:38.471206   81068 start.go:293] postStartSetup for "default-k8s-diff-port-022930" (driver="kvm2")
	I0717 18:40:38.471220   81068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:38.471247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.471558   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:38.471590   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.474241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474673   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.474704   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474868   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.475085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.475245   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.475524   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.554800   81068 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:38.558601   81068 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:38.558624   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:38.558685   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:38.558769   81068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:38.558875   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:38.567664   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:38.589713   81068 start.go:296] duration metric: took 118.491854ms for postStartSetup
	I0717 18:40:38.589754   81068 fix.go:56] duration metric: took 19.496049651s for fixHost
	I0717 18:40:38.589777   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.592433   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592813   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.592860   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592989   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.593188   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593368   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593536   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.593738   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.593937   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.593955   81068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:38.697050   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241638.669121206
	
	I0717 18:40:38.697075   81068 fix.go:216] guest clock: 1721241638.669121206
	I0717 18:40:38.697085   81068 fix.go:229] Guest: 2024-07-17 18:40:38.669121206 +0000 UTC Remote: 2024-07-17 18:40:38.589759024 +0000 UTC m=+204.149894792 (delta=79.362182ms)
	I0717 18:40:38.697108   81068 fix.go:200] guest clock delta is within tolerance: 79.362182ms
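The `date +%!s(MISSING).%!N(MISSING)` above is a logging artifact (Go's fmt emits %!s(MISSING) when a format argument is absent); judging by the 1721241638.669121206 output, the command actually sent to the guest is presumably:

    date +%s.%N    # guest clock in seconds.nanoseconds, compared against the host clock to get the 79ms delta above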
	I0717 18:40:38.697118   81068 start.go:83] releasing machines lock for "default-k8s-diff-port-022930", held for 19.603450588s
	I0717 18:40:38.697143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.697381   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:38.700059   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700504   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.700529   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700764   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701541   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701619   81068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:38.701672   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.701777   81068 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:38.701797   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.704169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704478   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.704503   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704657   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.704849   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705002   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705164   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.705262   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.705300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.705496   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.705663   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705817   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705967   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.825607   81068 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:38.831484   81068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:38.972775   81068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:38.978446   81068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:38.978502   81068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:38.999160   81068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:38.999180   81068 start.go:495] detecting cgroup driver to use...
	I0717 18:40:38.999234   81068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:39.016133   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:39.029031   81068 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:39.029083   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:39.042835   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:39.056981   81068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:39.168521   81068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:39.306630   81068 docker.go:233] disabling docker service ...
	I0717 18:40:39.306704   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:39.320435   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:39.337780   81068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:35.259643   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:35.759432   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.259818   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.759627   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.259968   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.758933   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.259980   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.759776   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.259988   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.496847   81068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:39.627783   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:39.641684   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:39.659183   81068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:39.659250   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.669034   81068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:39.669100   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.678708   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.688822   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.699484   81068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:39.709505   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.720715   81068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.736510   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.746991   81068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:39.757265   81068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:39.757320   81068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:39.774777   81068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:39.789593   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:39.907377   81068 ssh_runner.go:195] Run: sudo systemctl restart crio
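The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup and open unprivileged low ports, before systemd is reloaded and CRI-O restarted. The core edits, collapsed into a sketch:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio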
	I0717 18:40:40.039498   81068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:40.039592   81068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:40.044502   81068 start.go:563] Will wait 60s for crictl version
	I0717 18:40:40.044558   81068 ssh_runner.go:195] Run: which crictl
	I0717 18:40:40.048708   81068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:40.087738   81068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:40.087822   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.115460   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.150181   81068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:40:38.719828   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Start
	I0717 18:40:38.720004   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring networks are active...
	I0717 18:40:38.720983   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network default is active
	I0717 18:40:38.721537   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network mk-embed-certs-527415 is active
	I0717 18:40:38.721945   80180 main.go:141] libmachine: (embed-certs-527415) Getting domain xml...
	I0717 18:40:38.722654   80180 main.go:141] libmachine: (embed-certs-527415) Creating domain...
	I0717 18:40:40.007036   80180 main.go:141] libmachine: (embed-certs-527415) Waiting to get IP...
	I0717 18:40:40.007975   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.008511   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.008608   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.008495   82069 retry.go:31] will retry after 268.334211ms: waiting for machine to come up
	I0717 18:40:40.278129   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.278639   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.278670   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.278585   82069 retry.go:31] will retry after 350.00147ms: waiting for machine to come up
	I0717 18:40:40.630229   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.630819   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.630853   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.630768   82069 retry.go:31] will retry after 411.079615ms: waiting for machine to come up
	I0717 18:40:41.043232   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.043851   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.043880   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.043822   82069 retry.go:31] will retry after 387.726284ms: waiting for machine to come up
	I0717 18:40:41.433536   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.434058   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.434092   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.434005   82069 retry.go:31] will retry after 538.564385ms: waiting for machine to come up
	I0717 18:40:41.973917   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.974457   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.974489   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.974395   82069 retry.go:31] will retry after 778.576616ms: waiting for machine to come up
	I0717 18:40:42.754322   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:42.754872   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:42.754899   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:42.754837   82069 retry.go:31] will retry after 758.957234ms: waiting for machine to come up
	I0717 18:40:40.299673   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:40.801297   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.801325   80401 pod_ready.go:81] duration metric: took 4.508666316s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.801339   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807354   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.807372   80401 pod_ready.go:81] duration metric: took 6.024916ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807380   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812934   80401 pod_ready.go:92] pod "kube-proxy-tn5xn" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.812982   80401 pod_ready.go:81] duration metric: took 5.594378ms for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812996   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817940   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.817969   80401 pod_ready.go:81] duration metric: took 4.96427ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817982   80401 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:42.825018   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
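	(The pod_ready lines above repeatedly inspect a pod's Ready condition until it turns True or the 4m0s budget runs out. Below is a minimal client-go sketch of that kind of check; it is not minikube's pod_ready.go, and the kubeconfig path and polling interval are illustrative.)

	// podready.go - illustrative sketch of waiting for a pod's Ready condition.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the Ready condition on the pod is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-78fcd8795b-mbtvd", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}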
	I0717 18:40:40.151220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:40.153791   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:40.154246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154472   81068 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:40.159310   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:40.172121   81068 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:40.172256   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:40.172307   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:40.215863   81068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:40.215940   81068 ssh_runner.go:195] Run: which lz4
	I0717 18:40:40.220502   81068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:40.224682   81068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:40.224714   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:41.511505   81068 crio.go:462] duration metric: took 1.291039238s to copy over tarball
	I0717 18:40:41.511574   81068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:40:43.730839   81068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.219230444s)
	I0717 18:40:43.730901   81068 crio.go:469] duration metric: took 2.219370372s to extract the tarball
	I0717 18:40:43.730912   81068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:43.767876   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:43.809466   81068 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:40:43.809494   81068 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:40:43.809505   81068 kubeadm.go:934] updating node { 192.168.50.245 8444 v1.30.2 crio true true} ...
	I0717 18:40:43.809646   81068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-022930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:43.809740   81068 ssh_runner.go:195] Run: crio config
	I0717 18:40:43.850614   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:43.850635   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:43.850648   81068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:43.850669   81068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-022930 NodeName:default-k8s-diff-port-022930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:43.850795   81068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-022930"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:43.850851   81068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:40:43.862674   81068 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:43.862733   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:43.873304   81068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 18:40:43.888884   81068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:43.903631   81068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 18:40:43.918768   81068 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:43.922033   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:43.932546   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:44.049621   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:44.065718   81068 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930 for IP: 192.168.50.245
	I0717 18:40:44.065747   81068 certs.go:194] generating shared ca certs ...
	I0717 18:40:44.065767   81068 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:44.065939   81068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:44.065999   81068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:44.066016   81068 certs.go:256] generating profile certs ...
	I0717 18:40:44.066149   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/client.key
	I0717 18:40:44.066224   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key.8aa7f0a0
	I0717 18:40:44.066284   81068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key
	I0717 18:40:44.066445   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:44.066494   81068 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:44.066507   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:44.066548   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:44.066579   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:44.066606   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:44.066650   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:44.067421   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:44.104160   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:44.133716   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:44.161170   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:44.190489   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 18:40:44.211792   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:44.232875   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:44.255059   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:44.276826   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:44.298357   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:44.320634   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:44.345428   81068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:44.362934   81068 ssh_runner.go:195] Run: openssl version
	I0717 18:40:44.369764   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:44.382557   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386445   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386483   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.392033   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:44.401987   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:44.411437   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415367   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415419   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.420523   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:44.429915   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:44.439371   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443248   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443301   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.448380   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:44.457828   81068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:44.462151   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:44.467474   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:44.472829   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
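	(Each `openssl x509 ... -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now. An equivalent check in Go, as a sketch; the certificate path is the one from the log, the helper name and exit behavior are made up for illustration.)

	// certcheck.go - illustrative sketch of openssl's -checkend 86400 test.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within the given window (i.e. now+window falls past NotAfter).
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}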
	I0717 18:40:40.259910   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:40.759917   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.259718   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.759839   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.259129   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.759772   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.259989   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.759724   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.258978   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.759594   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.515097   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:43.515595   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:43.515616   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:43.515539   82069 retry.go:31] will retry after 1.173590835s: waiting for machine to come up
	I0717 18:40:44.691027   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:44.691479   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:44.691520   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:44.691428   82069 retry.go:31] will retry after 1.594704966s: waiting for machine to come up
	I0717 18:40:46.288022   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:46.288609   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:46.288642   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:46.288549   82069 retry.go:31] will retry after 2.014912325s: waiting for machine to come up
	I0717 18:40:45.323815   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:47.324715   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:44.478397   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:44.483860   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:44.489029   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:40:44.494220   81068 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:44.494329   81068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:44.494381   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.534380   81068 cri.go:89] found id: ""
	I0717 18:40:44.534445   81068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:44.545270   81068 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:44.545287   81068 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:44.545328   81068 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:44.555521   81068 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:44.556584   81068 kubeconfig.go:125] found "default-k8s-diff-port-022930" server: "https://192.168.50.245:8444"
	I0717 18:40:44.558675   81068 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:44.567696   81068 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.245
	I0717 18:40:44.567727   81068 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:44.567739   81068 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:44.567787   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.605757   81068 cri.go:89] found id: ""
	I0717 18:40:44.605833   81068 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:44.622187   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:44.631169   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:44.631191   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:44.631241   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:40:44.639194   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:44.639248   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:44.647542   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:40:44.655622   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:44.655708   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:44.663923   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.671733   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:44.671778   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.680375   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:40:44.688043   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:44.688085   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:44.697020   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:44.705554   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:44.812051   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.351683   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.559471   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.618086   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.678836   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:45.678926   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.179998   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.679083   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.179084   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.679042   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.179150   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.195192   81068 api_server.go:72] duration metric: took 2.516354411s to wait for apiserver process to appear ...
	I0717 18:40:48.195222   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:48.195247   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:45.259185   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:45.759765   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.259009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.759131   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.259477   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.759386   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.259977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.759374   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.259744   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.759440   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.393650   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.393688   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.393705   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.467974   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.468000   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.696340   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.702264   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:50.702308   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.195503   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.200034   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:51.200060   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.695594   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.699593   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:40:51.706025   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:40:51.706048   81068 api_server.go:131] duration metric: took 3.510818337s to wait for apiserver health ...
	I0717 18:40:51.706059   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:51.706067   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:51.707696   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
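	(The healthz wait logged above tolerates 403 responses while RBAC is still being bootstrapped and 500 responses while post-start hooks finish, and keeps polling until /healthz returns 200 "ok". A minimal Go sketch of such a poller; the URL matches the one in the log, and skipping TLS verification here is purely for illustration.)

	// healthz.go - illustrative sketch of polling the apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only; real code should trust the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports "ok"
				}
				// 403 and 500 are expected while the control plane finishes starting.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.245:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}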
	I0717 18:40:48.305798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:48.306290   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:48.306323   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:48.306232   82069 retry.go:31] will retry after 1.789943402s: waiting for machine to come up
	I0717 18:40:50.098279   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:50.098771   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:50.098798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:50.098734   82069 retry.go:31] will retry after 2.765766483s: waiting for machine to come up
	I0717 18:40:52.867667   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:52.868191   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:52.868212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:52.868139   82069 retry.go:31] will retry after 2.762670644s: waiting for machine to come up
	I0717 18:40:49.325415   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.824015   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:53.824980   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.708887   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:51.718704   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:40:51.735711   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:51.745976   81068 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:51.746009   81068 system_pods.go:61] "coredns-7db6d8ff4d-czk4x" [80cedf0b-248a-458e-994c-81f852d78076] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:51.746022   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f9cf97bf-5fdc-4623-a78c-d29e0352ce40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:51.746036   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [599cef4d-2b4d-4cd5-9552-99de585759eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:51.746051   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [89092470-6fc9-47b2-b680-7c93945d9005] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:51.746062   81068 system_pods.go:61] "kube-proxy-hj7ss" [d260f18e-7a01-4f07-8c6a-87e8f6329f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 18:40:51.746074   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [fe098478-fcb6-4084-b773-11c2cbb995aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:51.746083   81068 system_pods.go:61] "metrics-server-569cc877fc-j9qhx" [18efb008-e7d3-435e-9156-57c16b454d07] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:51.746093   81068 system_pods.go:61] "storage-provisioner" [ac856758-62ca-485f-aa31-5cd1c7d1dbe5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:40:51.746103   81068 system_pods.go:74] duration metric: took 10.373616ms to wait for pod list to return data ...
	I0717 18:40:51.746115   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:51.749151   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:51.749173   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:51.749185   81068 node_conditions.go:105] duration metric: took 3.061813ms to run NodePressure ...
	I0717 18:40:51.749204   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:52.049486   81068 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053636   81068 kubeadm.go:739] kubelet initialised
	I0717 18:40:52.053656   81068 kubeadm.go:740] duration metric: took 4.136528ms waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053665   81068 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:52.058401   81068 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.062406   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062429   81068 pod_ready.go:81] duration metric: took 4.007504ms for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.062439   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062454   81068 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.066161   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066185   81068 pod_ready.go:81] duration metric: took 3.717781ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.066202   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066212   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.070043   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070064   81068 pod_ready.go:81] duration metric: took 3.840533ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.070074   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070080   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:54.077110   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:50.258977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.259867   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.759826   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.259016   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.759708   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.259589   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.759788   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.259753   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.759841   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.633531   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.633999   80180 main.go:141] libmachine: (embed-certs-527415) Found IP for machine: 192.168.61.90
	I0717 18:40:55.634014   80180 main.go:141] libmachine: (embed-certs-527415) Reserving static IP address...
	I0717 18:40:55.634026   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has current primary IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.634407   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.634438   80180 main.go:141] libmachine: (embed-certs-527415) Reserved static IP address: 192.168.61.90
	I0717 18:40:55.634456   80180 main.go:141] libmachine: (embed-certs-527415) DBG | skip adding static IP to network mk-embed-certs-527415 - found existing host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"}
	I0717 18:40:55.634476   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Getting to WaitForSSH function...
	I0717 18:40:55.634490   80180 main.go:141] libmachine: (embed-certs-527415) Waiting for SSH to be available...
	I0717 18:40:55.636604   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.636877   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.636904   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.637010   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH client type: external
	I0717 18:40:55.637032   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa (-rw-------)
	I0717 18:40:55.637063   80180 main.go:141] libmachine: (embed-certs-527415) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:55.637082   80180 main.go:141] libmachine: (embed-certs-527415) DBG | About to run SSH command:
	I0717 18:40:55.637094   80180 main.go:141] libmachine: (embed-certs-527415) DBG | exit 0
	I0717 18:40:55.765208   80180 main.go:141] libmachine: (embed-certs-527415) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:55.765554   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:40:55.766322   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:55.769331   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.769800   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.769827   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.770203   80180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json ...
	I0717 18:40:55.770593   80180 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:55.770620   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:55.770826   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.773837   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774313   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.774346   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774553   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.774750   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.774909   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.775060   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.775277   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.775534   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.775556   80180 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:55.888982   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:55.889013   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889259   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:40:55.889286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889501   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.891900   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892284   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.892302   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892532   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.892701   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892853   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892993   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.893136   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.893293   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.893310   80180 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-527415 && echo "embed-certs-527415" | sudo tee /etc/hostname
	I0717 18:40:56.018869   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-527415
	
	I0717 18:40:56.018898   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.021591   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.021888   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.021909   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.022286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.022489   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022646   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022765   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.022905   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.023050   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.023066   80180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-527415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-527415/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-527415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:56.146411   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:56.146455   80180 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:56.146478   80180 buildroot.go:174] setting up certificates
	I0717 18:40:56.146490   80180 provision.go:84] configureAuth start
	I0717 18:40:56.146502   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:56.146767   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.149369   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149725   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.149755   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149937   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.152431   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152753   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.152774   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152936   80180 provision.go:143] copyHostCerts
	I0717 18:40:56.153028   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:56.153041   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:56.153096   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:56.153186   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:56.153194   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:56.153214   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:56.153277   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:56.153283   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:56.153300   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:56.153349   80180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.embed-certs-527415 san=[127.0.0.1 192.168.61.90 embed-certs-527415 localhost minikube]
	I0717 18:40:56.326978   80180 provision.go:177] copyRemoteCerts
	I0717 18:40:56.327024   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:56.327045   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.329432   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329778   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.329809   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329927   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.330121   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.330295   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.330409   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.415173   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:56.438501   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 18:40:56.460520   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:56.481808   80180 provision.go:87] duration metric: took 335.305142ms to configureAuth
	I0717 18:40:56.481832   80180 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:56.482001   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:56.482063   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.484653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485044   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.485074   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485222   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.485468   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485652   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485810   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.485953   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.486108   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.486123   80180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:56.741135   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:56.741185   80180 machine.go:97] duration metric: took 970.573336ms to provisionDockerMachine
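	[editor's note] The provisioning step just above writes a one-line sysconfig file on the guest and restarts CRI-O so the extra --insecure-registry flag takes effect. As a hedged illustration only (minikube runs this over SSH via its ssh_runner; the file path and flag value are taken from the log, the helper name is invented, and it runs locally here), a Go equivalent could look like:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// writeCRIOMinikubeOptions drops the CRIO_MINIKUBE_OPTIONS line into
	// /etc/sysconfig/crio.minikube and restarts the crio unit, mirroring the
	// SSH command shown in the log (executed locally here for simplicity).
	func writeCRIOMinikubeOptions(insecureCIDR string) error {
		content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureCIDR)
		if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
			return err
		}
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
			return err
		}
		return exec.Command("systemctl", "restart", "crio").Run()
	}

	func main() {
		if err := writeCRIOMinikubeOptions("10.96.0.0/12"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}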
	I0717 18:40:56.741204   80180 start.go:293] postStartSetup for "embed-certs-527415" (driver="kvm2")
	I0717 18:40:56.741221   80180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:56.741245   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.741597   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:56.741625   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.744356   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.744805   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.744831   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.745025   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.745224   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.745382   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.745549   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.835435   80180 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:56.839724   80180 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:56.839753   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:56.839834   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:56.839945   80180 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:56.840083   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:56.849582   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:56.872278   80180 start.go:296] duration metric: took 131.057656ms for postStartSetup
	I0717 18:40:56.872347   80180 fix.go:56] duration metric: took 18.175085798s for fixHost
	I0717 18:40:56.872375   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.874969   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875308   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.875340   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875533   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.875722   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.875955   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.876089   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.876274   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.876459   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.876469   80180 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:56.985888   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241656.959508652
	
	I0717 18:40:56.985907   80180 fix.go:216] guest clock: 1721241656.959508652
	I0717 18:40:56.985914   80180 fix.go:229] Guest: 2024-07-17 18:40:56.959508652 +0000 UTC Remote: 2024-07-17 18:40:56.872354453 +0000 UTC m=+348.896679896 (delta=87.154199ms)
	I0717 18:40:56.985939   80180 fix.go:200] guest clock delta is within tolerance: 87.154199ms
	I0717 18:40:56.985944   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 18.288718042s
	I0717 18:40:56.985964   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.986210   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.988716   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989086   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.989114   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989279   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989786   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989966   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.990055   80180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:56.990092   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.990360   80180 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:56.990390   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.992519   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992816   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.992835   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992852   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992984   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993162   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.993234   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.993356   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993401   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993499   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.993541   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993754   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993915   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:57.116598   80180 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:57.122546   80180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:57.268379   80180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:57.274748   80180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:57.274819   80180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:57.290374   80180 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:57.290394   80180 start.go:495] detecting cgroup driver to use...
	I0717 18:40:57.290443   80180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:57.307521   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:57.323478   80180 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:57.323554   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:57.337078   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:57.350181   80180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:57.463512   80180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:57.626650   80180 docker.go:233] disabling docker service ...
	I0717 18:40:57.626714   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:57.641067   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:57.655085   80180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:57.802789   80180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:57.919140   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:57.932620   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:57.949471   80180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:57.949528   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.960297   80180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:57.960366   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.970890   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.980768   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.990723   80180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:58.000791   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.010332   80180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.026611   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
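	[editor's note] The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, and re-add the unprivileged-port sysctl default. A minimal Go sketch of the same kind of in-place key rewrite (file path and values taken from the log; the helper name is invented and this is not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfValue rewrites every `<key> = ...` line in a CRI-O drop-in config,
	// the same effect as the logged sed expressions for pause_image and
	// cgroup_manager.
	func setConfValue(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		for key, value := range map[string]string{
			"pause_image":    "registry.k8s.io/pause:3.9",
			"cgroup_manager": "cgroupfs",
		} {
			if err := setConfValue(conf, key, value); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
		}
	}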
	I0717 18:40:58.036106   80180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:58.044742   80180 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:58.044791   80180 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:58.056584   80180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
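	[editor's note] The sequence above is a fallback, not a failure: the sysctl probe returns status 255 because br_netfilter is not yet loaded, so the module is loaded and IPv4 forwarding is enabled before CRI-O is restarted. A rough sketch of that fallback under the assumption it runs locally with the same command names (wrapper name invented):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter probes the bridge netfilter sysctl and, when the
	// probe fails (module not loaded), loads br_netfilter; it then enables
	// IPv4 forwarding, matching the order of commands in the log.
	func ensureBridgeNetfilter() error {
		// The sysctl only exists once the br_netfilter module is loaded.
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		// Enable IPv4 forwarding for pod traffic.
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}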
	I0717 18:40:58.065470   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:58.182119   80180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:58.319330   80180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:58.319400   80180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:58.326361   80180 start.go:563] Will wait 60s for crictl version
	I0717 18:40:58.326405   80180 ssh_runner.go:195] Run: which crictl
	I0717 18:40:58.329951   80180 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:58.366561   80180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:58.366668   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.398483   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.427421   80180 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:40:56.324834   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.325283   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:56.077315   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.077815   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:55.259450   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.759932   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.259395   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.759855   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.259739   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.759436   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.258951   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.759931   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.259588   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.759651   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.428872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:58.431182   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431554   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:58.431580   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431756   80180 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:58.435914   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:58.448777   80180 kubeadm.go:883] updating cluster {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:58.448923   80180 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:58.449018   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:58.488011   80180 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:58.488077   80180 ssh_runner.go:195] Run: which lz4
	I0717 18:40:58.491828   80180 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:58.495609   80180 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:58.495640   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:59.686445   80180 crio.go:462] duration metric: took 1.194619366s to copy over tarball
	I0717 18:40:59.686513   80180 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:41:01.862679   80180 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176132338s)
	I0717 18:41:01.862710   80180 crio.go:469] duration metric: took 2.176236509s to extract the tarball
	I0717 18:41:01.862719   80180 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:41:01.901813   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:41:01.945403   80180 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:41:01.945429   80180 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:41:01.945438   80180 kubeadm.go:934] updating node { 192.168.61.90 8443 v1.30.2 crio true true} ...
	I0717 18:41:01.945554   80180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-527415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:41:01.945631   80180 ssh_runner.go:195] Run: crio config
	I0717 18:41:01.991102   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:01.991130   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:01.991144   80180 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:41:01.991168   80180 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.90 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-527415 NodeName:embed-certs-527415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:41:01.991331   80180 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-527415"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:41:01.991397   80180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:41:02.001007   80180 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:41:02.001082   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:41:02.010130   80180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0717 18:41:02.025405   80180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:41:02.041167   80180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0717 18:41:02.057441   80180 ssh_runner.go:195] Run: grep 192.168.61.90	control-plane.minikube.internal$ /etc/hosts
	I0717 18:41:02.060878   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
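	[editor's note] The two Run lines above first grep /etc/hosts for a control-plane.minikube.internal entry and then rewrite the file with the current IP. The same idempotent update can be sketched in Go (illustrative helper applied directly to a hosts-style file rather than over SSH; IP and hostname are the ones from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry removes any existing line that maps name and appends a
	// fresh "<ip>\t<name>" entry, mirroring the grep -v / echo / cp pipeline
	// from the log.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop any stale mapping for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.61.90", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}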
	I0717 18:41:02.072984   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:41:02.188194   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:41:02.204599   80180 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415 for IP: 192.168.61.90
	I0717 18:41:02.204623   80180 certs.go:194] generating shared ca certs ...
	I0717 18:41:02.204643   80180 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:41:02.204822   80180 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:41:02.204885   80180 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:41:02.204899   80180 certs.go:256] generating profile certs ...
	I0717 18:41:02.205047   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key
	I0717 18:41:02.205129   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9
	I0717 18:41:02.205188   80180 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key
	I0717 18:41:02.205372   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:41:02.205436   80180 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:41:02.205451   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:41:02.205486   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:41:02.205526   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:41:02.205556   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:41:02.205612   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:41:02.206441   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:41:02.234135   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:41:02.259780   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:41:02.285464   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:41:02.316267   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 18:41:02.348835   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:41:02.375505   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:41:02.402683   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:41:02.426689   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:41:02.449328   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:41:02.472140   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:41:02.494016   80180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:41:02.512612   80180 ssh_runner.go:195] Run: openssl version
	I0717 18:41:02.519908   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:41:02.532706   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538136   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538191   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.545493   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:41:02.558832   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:41:02.570455   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575515   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575582   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.581428   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:41:02.592439   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:41:02.602823   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608370   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608433   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.615367   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:41:02.628355   80180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:41:02.632772   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:41:02.638325   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:41:02.643635   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:41:02.648960   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:41:02.654088   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:41:02.659220   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
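	[editor's note] Each "openssl x509 -checkend 86400" call above verifies that a control-plane certificate remains valid for at least another 24 hours before it is reused. A hedged Go equivalent of that predicate using crypto/x509 (the path below is one of the certs named in the log; the function name is made up):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validForAtLeast reports whether the PEM certificate at path is still
	// valid at now+window, the same check as "openssl x509 -checkend <sec>".
	func validForAtLeast(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validForAtLeast("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for at least 24h:", ok)
	}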
	I0717 18:41:02.664325   80180 kubeadm.go:392] StartCluster: {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:41:02.664444   80180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:41:02.664495   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.699590   80180 cri.go:89] found id: ""
	I0717 18:41:02.699676   80180 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:41:02.709427   80180 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:41:02.709452   80180 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:41:02.709503   80180 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:41:02.718489   80180 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:41:02.719505   80180 kubeconfig.go:125] found "embed-certs-527415" server: "https://192.168.61.90:8443"
	I0717 18:41:02.721457   80180 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:41:02.730258   80180 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.90
	I0717 18:41:02.730288   80180 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:41:02.730301   80180 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:41:02.730367   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.768268   80180 cri.go:89] found id: ""
	I0717 18:41:02.768339   80180 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:41:02.786699   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:41:02.796888   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:41:02.796912   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:41:02.796965   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:41:02.805633   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:41:02.805703   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:41:02.817624   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:41:02.827840   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:41:02.827902   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:41:02.836207   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.844201   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:41:02.844265   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.852667   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:41:02.860697   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:41:02.860741   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:41:02.869133   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:41:02.877992   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:02.986350   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
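The restartPrimaryControlPlane path above rebuilds the control-plane state by shelling out to kubeadm one phase at a time: certs and kubeconfig here, with kubelet-start, control-plane and etcd following a little further down in the log. As an illustration only (this is not minikube's bootstrapper code, and the config path is simply the one shown in the log), the same phase sequence could be driven like this:

// kubeadm_phases.go - a hedged sketch of running the "kubeadm init phase"
// sequence that the restart log lines above and below show.
// The config path and the kubeadm binary location are placeholders.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const config = "/var/tmp/minikube/kubeadm.yaml" // placeholder path taken from the log
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", config},
		{"init", "phase", "kubeconfig", "all", "--config", config},
		{"init", "phase", "kubelet-start", "--config", config},
		{"init", "phase", "control-plane", "all", "--config", config},
		{"init", "phase", "etcd", "local", "--config", config},
	}
	for _, args := range phases {
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Println("running: kubeadm", args)
		if err := cmd.Run(); err != nil {
			// Later phases depend on earlier ones, so stop on the first failure.
			fmt.Println("phase failed:", err)
			return
		}
	}
}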
	I0717 18:41:00.823447   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.825375   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:00.578095   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.576899   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.576927   81068 pod_ready.go:81] duration metric: took 10.506835962s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.576953   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584912   81068 pod_ready.go:92] pod "kube-proxy-hj7ss" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.584933   81068 pod_ready.go:81] duration metric: took 7.972079ms for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584964   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590342   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.590366   81068 pod_ready.go:81] duration metric: took 5.392364ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590380   81068 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:00.259461   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:00.759148   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.259596   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.759943   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.259670   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.759900   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.259745   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.759843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.259902   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.759850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.874112   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.091026   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.170734   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.292719   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:41:04.292826   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.793710   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.292924   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.792872   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.293626   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.793632   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.810658   80180 api_server.go:72] duration metric: took 2.517938682s to wait for apiserver process to appear ...
	I0717 18:41:06.810685   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:41:06.810705   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
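The api_server.go lines that follow poll this /healthz URL roughly every half second; the 403 responses are expected while anonymous access is still rejected, the 500s while post-start hooks such as rbac/bootstrap-roles finish, and the wait only ends once the endpoint returns 200. A minimal stand-alone sketch of that kind of probe (placeholder URL and timeout, TLS verification skipped, not the actual minikube code):

// healthz_poll.go - a minimal sketch of polling an apiserver /healthz
// endpoint until it reports 200 OK, mirroring the checks in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.61.90:8443/healthz" // address taken from the log; adjust as needed
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate during bootstrap, so this
		// bare probe skips verification; real tooling would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	fmt.Println("timed out waiting for apiserver health")
}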
	I0717 18:41:05.323684   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:07.324653   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:04.596794   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:06.597411   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:09.097409   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:05.259624   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.759258   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.259346   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.759041   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.259467   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.759164   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.259047   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.759959   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.259372   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.759259   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.612683   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.612715   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.612728   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.633949   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.633975   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.811272   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.815690   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:09.815720   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:10.311256   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.319587   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.319620   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:10.811133   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.815819   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.815862   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.311037   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.315892   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.315923   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.811534   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.816601   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.816631   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.311178   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.315484   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.315510   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.811068   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.821016   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.821048   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:13.311166   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:13.315879   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:41:13.322661   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:41:13.322700   80180 api_server.go:131] duration metric: took 6.512007091s to wait for apiserver health ...
	I0717 18:41:13.322713   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:13.322722   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:13.324516   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:41:09.325535   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.325697   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:13.327238   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.597479   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:14.098908   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:10.259845   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:10.759671   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.259895   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.759877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.259003   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.759685   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.759844   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.259541   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.759709   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.325935   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:41:13.337601   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:41:13.354366   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:41:13.364678   80180 system_pods.go:59] 8 kube-system pods found
	I0717 18:41:13.364715   80180 system_pods.go:61] "coredns-7db6d8ff4d-2fnlb" [86d50e9b-fb88-4332-90c5-a969b0654635] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:41:13.364726   80180 system_pods.go:61] "etcd-embed-certs-527415" [9d8ac0a8-4639-48d8-8ac4-88b0bd1e2082] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:41:13.364735   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [7f72c4f9-f1db-4ac6-83e1-2b94245107c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:41:13.364743   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [96081a97-2a90-4fec-84cb-9a399a43aeb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:41:13.364752   80180 system_pods.go:61] "kube-proxy-jltfs" [27f6259e-80cc-4881-bb06-6a2ad529179c] Running
	I0717 18:41:13.364763   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [bed7b515-7ab0-460c-a13f-037f29576f30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:41:13.364775   80180 system_pods.go:61] "metrics-server-569cc877fc-8md44" [1b9d50c8-6ca0-41c3-92d9-eebdccbf1a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:41:13.364783   80180 system_pods.go:61] "storage-provisioner" [ccb34b69-d28d-477e-8c7a-0acdc547bec7] Running
	I0717 18:41:13.364791   80180 system_pods.go:74] duration metric: took 10.40947ms to wait for pod list to return data ...
	I0717 18:41:13.364803   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:41:13.367687   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:41:13.367712   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:41:13.367725   80180 node_conditions.go:105] duration metric: took 2.912986ms to run NodePressure ...
	I0717 18:41:13.367745   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:13.630827   80180 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636658   80180 kubeadm.go:739] kubelet initialised
	I0717 18:41:13.636688   80180 kubeadm.go:740] duration metric: took 5.830484ms waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636699   80180 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:41:13.642171   80180 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.650539   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650573   80180 pod_ready.go:81] duration metric: took 8.374432ms for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.650585   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650599   80180 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.655470   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655500   80180 pod_ready.go:81] duration metric: took 4.8911ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.655512   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655520   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.662448   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662479   80180 pod_ready.go:81] duration metric: took 6.949002ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.662490   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662499   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.757454   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757485   80180 pod_ready.go:81] duration metric: took 94.976348ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.757494   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757501   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157339   80180 pod_ready.go:92] pod "kube-proxy-jltfs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:14.157363   80180 pod_ready.go:81] duration metric: took 399.852649ms for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157381   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:16.163623   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:15.825045   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.323440   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:16.596320   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.596807   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:15.259558   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:15.759585   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.259850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.760009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.259385   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.759208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.259218   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.759779   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.259666   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.759781   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.174371   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.664423   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.663932   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:22.663955   80180 pod_ready.go:81] duration metric: took 8.506565077s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:22.663969   80180 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
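Each pod_ready.go wait above polls a single pod's Ready condition until it reports True or the 4m0s budget runs out, which is why the metrics-server pods keep logging status "Ready":"False". A rough equivalent driven through kubectl from Go (the profile, namespace and pod names are placeholders copied from the log, not part of any minikube API):

// pod_ready_wait.go - an illustrative sketch of the Ready-condition polling
// described by the pod_ready.go lines above; not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true when the pod's Ready condition reports "True".
func podReady(kubecontext, namespace, pod string) (bool, error) {
	out, err := exec.Command(
		"kubectl", "--context", kubecontext, "-n", namespace,
		"get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
	).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Placeholder names taken from the log above.
	const kubecontext, ns, pod = "embed-certs-527415", "kube-system", "metrics-server-569cc877fc-8md44"

	deadline := time.Now().Add(4 * time.Minute) // same budget as the 4m0s waits in the log
	for time.Now().Before(deadline) {
		ready, err := podReady(kubecontext, ns, pod)
		if err != nil {
			fmt.Println("lookup failed:", err)
		} else if ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out: pod never became Ready")
}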
	I0717 18:41:20.324547   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.824318   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:21.096071   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:23.596775   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.259286   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:20.759048   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.259801   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.759595   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.259582   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.759871   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.259349   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.759659   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.259964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.759899   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.671105   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:27.170247   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:24.825017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.825067   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.096196   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:28.097501   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:25.259559   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:25.759773   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.759924   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.259509   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.759986   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.259792   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.759564   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:29.259060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:29.259143   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:29.298974   80857 cri.go:89] found id: ""
	I0717 18:41:29.299006   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.299016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:29.299024   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:29.299087   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:29.333764   80857 cri.go:89] found id: ""
	I0717 18:41:29.333786   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.333793   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:29.333801   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:29.333849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:29.369639   80857 cri.go:89] found id: ""
	I0717 18:41:29.369674   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.369688   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:29.369697   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:29.369762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:29.403453   80857 cri.go:89] found id: ""
	I0717 18:41:29.403481   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.403489   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:29.403498   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:29.403555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:29.436662   80857 cri.go:89] found id: ""
	I0717 18:41:29.436687   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.436695   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:29.436701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:29.436749   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:29.471013   80857 cri.go:89] found id: ""
	I0717 18:41:29.471053   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.471064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:29.471074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:29.471139   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:29.502754   80857 cri.go:89] found id: ""
	I0717 18:41:29.502780   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.502787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:29.502793   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:29.502842   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:29.534205   80857 cri.go:89] found id: ""
	I0717 18:41:29.534232   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.534239   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:29.534247   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:29.534259   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:29.585406   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:29.585438   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:29.600629   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:29.600660   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:29.719788   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:29.719807   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:29.719819   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:29.785626   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:29.785662   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
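Because no kube-apiserver process or container ever appears for this profile, the run above falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status logs; each expected control-plane component is probed with a crictl query whose empty output means "No container was found matching". A small sketch of that check (an assumed helper, not the real logs.go implementation):

// cri_check.go - a rough sketch of checking whether the expected control-plane
// containers exist, mirroring the crictl calls shown in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	names := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager",
	}
	for _, name := range names {
		// Mirrors "sudo crictl ps -a --quiet --name=<name>" from the log;
		// --quiet prints only container IDs, so empty output means "not found".
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		if strings.TrimSpace(string(out)) == "" {
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%s: found %d container(s)\n", name, len(strings.Fields(string(out))))
		}
	}
}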
	I0717 18:41:29.669918   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.670544   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:29.325013   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.828532   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:30.097685   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.596760   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.325522   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:32.338046   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:32.338120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:32.370073   80857 cri.go:89] found id: ""
	I0717 18:41:32.370099   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.370106   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:32.370112   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:32.370165   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:32.408764   80857 cri.go:89] found id: ""
	I0717 18:41:32.408789   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.408799   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:32.408806   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:32.408862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:32.449078   80857 cri.go:89] found id: ""
	I0717 18:41:32.449108   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.449118   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:32.449125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:32.449176   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:32.481990   80857 cri.go:89] found id: ""
	I0717 18:41:32.482015   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.482022   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:32.482028   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:32.482077   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:32.521902   80857 cri.go:89] found id: ""
	I0717 18:41:32.521932   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.521942   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:32.521949   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:32.521997   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:32.554148   80857 cri.go:89] found id: ""
	I0717 18:41:32.554177   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.554206   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:32.554216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:32.554270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:32.587342   80857 cri.go:89] found id: ""
	I0717 18:41:32.587366   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.587374   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:32.587379   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:32.587425   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:32.619227   80857 cri.go:89] found id: ""
	I0717 18:41:32.619259   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.619270   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:32.619281   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:32.619296   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:32.669085   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:32.669124   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:32.682464   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:32.682500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:32.749218   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:32.749234   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:32.749245   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:32.814510   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:32.814545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:33.670578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.670952   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.671373   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:34.324458   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:36.823615   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:38.825194   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.096041   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.096436   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:39.096906   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
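
The interleaved pod_ready lines come from separate test processes (PIDs 80180, 80401, 81068) polling their own clusters while waiting for a metrics-server pod to report Ready. A hedged sketch of that kind of readiness poll using client-go follows; the kubeconfig path and pod name are taken from this log purely as placeholders, and this is not minikube's pod_ready.go.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "metrics-server-569cc877fc-8md44", metav1.GetOptions{}) // pod name from this log
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            fmt.Println(`pod has status "Ready":"False"`)
            time.Sleep(2 * time.Second) // the log shows roughly 2-3 s between checks
        }
    }

Because metrics-server never becomes Ready in these runs, the poll keeps printing "Ready":"False" every couple of seconds until the surrounding test gives up.
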
	I0717 18:41:35.362866   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:35.375563   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:35.375643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:35.412355   80857 cri.go:89] found id: ""
	I0717 18:41:35.412380   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.412388   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:35.412393   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:35.412439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:35.446596   80857 cri.go:89] found id: ""
	I0717 18:41:35.446621   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.446629   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:35.446634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:35.446691   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:35.481695   80857 cri.go:89] found id: ""
	I0717 18:41:35.481717   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.481725   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:35.481730   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:35.481783   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:35.514528   80857 cri.go:89] found id: ""
	I0717 18:41:35.514573   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.514584   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:35.514592   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:35.514657   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:35.547831   80857 cri.go:89] found id: ""
	I0717 18:41:35.547858   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.547871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:35.547879   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:35.547941   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:35.579059   80857 cri.go:89] found id: ""
	I0717 18:41:35.579084   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.579097   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:35.579104   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:35.579164   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:35.616442   80857 cri.go:89] found id: ""
	I0717 18:41:35.616480   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.616487   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:35.616492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:35.616545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:35.647535   80857 cri.go:89] found id: ""
	I0717 18:41:35.647564   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.647571   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:35.647579   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:35.647595   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:35.696664   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:35.696692   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:35.710474   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:35.710499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:35.785569   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:35.785595   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:35.785611   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:35.865750   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:35.865785   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:38.405391   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:38.417737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:38.417806   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:38.453848   80857 cri.go:89] found id: ""
	I0717 18:41:38.453877   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.453888   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:38.453895   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:38.453949   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:38.487083   80857 cri.go:89] found id: ""
	I0717 18:41:38.487112   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.487122   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:38.487129   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:38.487190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:38.517700   80857 cri.go:89] found id: ""
	I0717 18:41:38.517729   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.517738   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:38.517746   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:38.517808   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:38.547587   80857 cri.go:89] found id: ""
	I0717 18:41:38.547616   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.547625   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:38.547632   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:38.547780   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:38.581511   80857 cri.go:89] found id: ""
	I0717 18:41:38.581535   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.581542   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:38.581548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:38.581675   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:38.618308   80857 cri.go:89] found id: ""
	I0717 18:41:38.618327   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.618334   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:38.618340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:38.618401   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:38.658237   80857 cri.go:89] found id: ""
	I0717 18:41:38.658267   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.658278   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:38.658298   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:38.658359   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:38.694044   80857 cri.go:89] found id: ""
	I0717 18:41:38.694071   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.694080   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:38.694090   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:38.694106   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:38.746621   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:38.746658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:38.758781   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:38.758805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:38.827327   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:38.827345   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:38.827357   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:38.899731   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:38.899762   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:40.170106   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:42.170391   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:40.825940   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.327489   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.097668   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.597625   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.437479   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:41.451264   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:41.451336   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:41.489053   80857 cri.go:89] found id: ""
	I0717 18:41:41.489083   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.489093   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:41.489101   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:41.489162   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:41.521954   80857 cri.go:89] found id: ""
	I0717 18:41:41.521985   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.521996   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:41.522003   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:41.522068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:41.556847   80857 cri.go:89] found id: ""
	I0717 18:41:41.556875   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.556884   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:41.556893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:41.557024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:41.591232   80857 cri.go:89] found id: ""
	I0717 18:41:41.591255   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.591263   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:41.591269   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:41.591315   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:41.624533   80857 cri.go:89] found id: ""
	I0717 18:41:41.624565   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.624576   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:41.624583   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:41.624644   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:41.656033   80857 cri.go:89] found id: ""
	I0717 18:41:41.656063   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.656073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:41.656080   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:41.656140   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:41.691686   80857 cri.go:89] found id: ""
	I0717 18:41:41.691715   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.691725   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:41.691732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:41.691789   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:41.724688   80857 cri.go:89] found id: ""
	I0717 18:41:41.724718   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.724729   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:41.724741   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:41.724760   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:41.802855   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:41.802882   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:41.839242   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:41.839271   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:41.889028   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:41.889058   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:41.901598   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:41.901627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:41.972632   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
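
Every "describe nodes" attempt fails the same way: the connection to localhost:8443 is refused because no apiserver container ever started, so nothing is listening on the apiserver port. A quick way to confirm that condition from the node, shown here as an illustrative standalone check rather than anything minikube runs:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Try to open a TCP connection to the apiserver port the kubeconfig points at.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Matches the repeated "connection to the server localhost:8443 was refused" above.
            fmt.Println("apiserver port not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }

A refused connection from this dial is consistent with the stderr repeated above; a successful dial would instead point at a TLS or auth problem rather than a missing apiserver.
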
	I0717 18:41:44.472824   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:44.487673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:44.487745   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:44.530173   80857 cri.go:89] found id: ""
	I0717 18:41:44.530204   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.530216   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:44.530224   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:44.530288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:44.577865   80857 cri.go:89] found id: ""
	I0717 18:41:44.577891   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.577899   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:44.577905   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:44.577967   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:44.621528   80857 cri.go:89] found id: ""
	I0717 18:41:44.621551   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.621559   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:44.621564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:44.621622   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:44.655456   80857 cri.go:89] found id: ""
	I0717 18:41:44.655488   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.655498   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:44.655505   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:44.655570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:44.688729   80857 cri.go:89] found id: ""
	I0717 18:41:44.688757   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.688767   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:44.688774   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:44.688832   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:44.720190   80857 cri.go:89] found id: ""
	I0717 18:41:44.720220   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.720231   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:44.720238   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:44.720294   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:44.750109   80857 cri.go:89] found id: ""
	I0717 18:41:44.750135   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.750142   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:44.750147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:44.750203   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:44.780039   80857 cri.go:89] found id: ""
	I0717 18:41:44.780066   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.780090   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:44.780098   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:44.780111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:44.829641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:44.829675   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:44.842587   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:44.842616   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:44.906331   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.906355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:44.906369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:44.983364   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:44.983400   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:44.671557   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.170565   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:45.827780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.324627   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:46.096988   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.596469   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.525057   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:47.538586   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:47.538639   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:47.574805   80857 cri.go:89] found id: ""
	I0717 18:41:47.574832   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.574843   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:47.574849   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:47.574906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:47.609576   80857 cri.go:89] found id: ""
	I0717 18:41:47.609603   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.609611   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:47.609617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:47.609662   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:47.643899   80857 cri.go:89] found id: ""
	I0717 18:41:47.643927   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.643936   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:47.643941   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:47.643990   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:47.680365   80857 cri.go:89] found id: ""
	I0717 18:41:47.680404   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.680412   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:47.680418   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:47.680475   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:47.719038   80857 cri.go:89] found id: ""
	I0717 18:41:47.719061   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.719069   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:47.719074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:47.719118   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:47.751708   80857 cri.go:89] found id: ""
	I0717 18:41:47.751735   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.751744   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:47.751750   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:47.751807   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:47.789803   80857 cri.go:89] found id: ""
	I0717 18:41:47.789838   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.789850   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:47.789858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:47.789921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:47.821450   80857 cri.go:89] found id: ""
	I0717 18:41:47.821477   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.821487   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:47.821496   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:47.821511   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:47.886501   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:47.886526   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:47.886544   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:47.960142   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:47.960177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:47.995012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:47.995046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:48.046848   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:48.046884   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:49.670208   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:52.169471   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.324628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.597215   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.096114   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.560990   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:50.574906   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:50.575051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:50.607647   80857 cri.go:89] found id: ""
	I0717 18:41:50.607674   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.607687   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:50.607696   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:50.607756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:50.640621   80857 cri.go:89] found id: ""
	I0717 18:41:50.640651   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.640660   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:50.640667   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:50.640741   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:50.675269   80857 cri.go:89] found id: ""
	I0717 18:41:50.675293   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.675303   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:50.675313   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:50.675369   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:50.707915   80857 cri.go:89] found id: ""
	I0717 18:41:50.707938   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.707946   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:50.707951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:50.708006   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:50.741149   80857 cri.go:89] found id: ""
	I0717 18:41:50.741170   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.741178   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:50.741184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:50.741288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:50.772768   80857 cri.go:89] found id: ""
	I0717 18:41:50.772792   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.772799   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:50.772804   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:50.772854   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:50.804996   80857 cri.go:89] found id: ""
	I0717 18:41:50.805018   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.805028   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:50.805035   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:50.805094   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:50.838933   80857 cri.go:89] found id: ""
	I0717 18:41:50.838960   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.838971   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:50.838982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:50.838997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:50.886415   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:50.886444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:50.899024   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:50.899049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:50.965388   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:50.965416   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:50.965434   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:51.044449   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:51.044490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.580749   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:53.593759   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:53.593841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:53.626541   80857 cri.go:89] found id: ""
	I0717 18:41:53.626573   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.626582   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:53.626588   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:53.626645   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:53.658492   80857 cri.go:89] found id: ""
	I0717 18:41:53.658520   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.658529   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:53.658537   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:53.658600   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:53.694546   80857 cri.go:89] found id: ""
	I0717 18:41:53.694582   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.694590   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:53.694595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:53.694650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:53.727028   80857 cri.go:89] found id: ""
	I0717 18:41:53.727053   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.727061   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:53.727067   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:53.727129   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:53.762869   80857 cri.go:89] found id: ""
	I0717 18:41:53.762897   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.762906   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:53.762913   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:53.762976   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:53.794133   80857 cri.go:89] found id: ""
	I0717 18:41:53.794158   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.794166   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:53.794172   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:53.794225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:53.828432   80857 cri.go:89] found id: ""
	I0717 18:41:53.828463   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.828473   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:53.828484   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:53.828546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:53.863316   80857 cri.go:89] found id: ""
	I0717 18:41:53.863345   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.863353   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:53.863362   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:53.863384   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.897353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:53.897380   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:53.944213   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:53.944242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:53.957484   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:53.957509   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:54.025962   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:54.025992   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:54.026006   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:54.170642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.672407   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.325017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:57.823877   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.596492   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:58.096397   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.609502   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:56.621849   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:56.621913   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:56.657469   80857 cri.go:89] found id: ""
	I0717 18:41:56.657498   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.657510   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:56.657517   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:56.657579   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:56.691298   80857 cri.go:89] found id: ""
	I0717 18:41:56.691320   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.691327   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:56.691332   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:56.691386   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:56.723305   80857 cri.go:89] found id: ""
	I0717 18:41:56.723334   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.723344   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:56.723352   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:56.723417   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:56.755893   80857 cri.go:89] found id: ""
	I0717 18:41:56.755918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.755926   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:56.755931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:56.755982   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:56.787777   80857 cri.go:89] found id: ""
	I0717 18:41:56.787807   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.787819   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:56.787828   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:56.787894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:56.821126   80857 cri.go:89] found id: ""
	I0717 18:41:56.821152   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.821163   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:56.821170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:56.821228   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:56.855894   80857 cri.go:89] found id: ""
	I0717 18:41:56.855918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.855926   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:56.855931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:56.855980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:56.893483   80857 cri.go:89] found id: ""
	I0717 18:41:56.893505   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.893512   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:56.893521   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:56.893532   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:56.945355   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:56.945385   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:56.958426   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:56.958451   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:57.025542   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:57.025571   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:57.025585   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:57.100497   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:57.100528   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:59.636400   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:59.648517   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:59.648571   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:59.683954   80857 cri.go:89] found id: ""
	I0717 18:41:59.683978   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.683988   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:59.683995   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:59.684065   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:59.719135   80857 cri.go:89] found id: ""
	I0717 18:41:59.719162   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.719172   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:59.719179   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:59.719243   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:59.755980   80857 cri.go:89] found id: ""
	I0717 18:41:59.756012   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.756023   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:59.756030   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:59.756091   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:59.788147   80857 cri.go:89] found id: ""
	I0717 18:41:59.788176   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.788185   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:59.788191   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:59.788239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:59.819646   80857 cri.go:89] found id: ""
	I0717 18:41:59.819670   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.819679   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:59.819685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:59.819738   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:59.852487   80857 cri.go:89] found id: ""
	I0717 18:41:59.852508   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.852516   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:59.852521   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:59.852586   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:59.883761   80857 cri.go:89] found id: ""
	I0717 18:41:59.883794   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.883805   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:59.883812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:59.883870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:59.914854   80857 cri.go:89] found id: ""
	I0717 18:41:59.914882   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.914889   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:59.914896   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:59.914909   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:59.995619   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:59.995650   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:00.034444   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:00.034472   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:59.172253   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.670422   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:59.824347   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.824444   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:03.826580   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.096457   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:02.596587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.084278   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:00.084308   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:00.097771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:00.097796   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:00.161753   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
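
The cycle above is the pattern this retry loop repeats for the rest of the log: probe each control-plane component with `sudo crictl ps -a --quiet --name=<component>`, treat empty output as "No container was found matching ...", and then fall back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. As a rough illustration of that probe step only (a minimal sketch, not minikube's actual cri.go/logs.go code; in the log the command is executed over SSH inside the node, here it is run locally):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names taken from the probe sequence in the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// List all containers (running or not) whose name matches the component;
		// --quiet prints only container IDs, one per line.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			// Empty output is what the log reports as "0 containers".
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
	}
}
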
	I0717 18:42:02.662134   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:02.676200   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:02.676277   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:02.711606   80857 cri.go:89] found id: ""
	I0717 18:42:02.711640   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.711652   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:02.711659   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:02.711711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:02.744704   80857 cri.go:89] found id: ""
	I0717 18:42:02.744728   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.744735   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:02.744741   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:02.744800   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:02.778815   80857 cri.go:89] found id: ""
	I0717 18:42:02.778846   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.778859   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:02.778868   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:02.778936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:02.810896   80857 cri.go:89] found id: ""
	I0717 18:42:02.810928   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.810941   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:02.810950   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:02.811024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:02.843868   80857 cri.go:89] found id: ""
	I0717 18:42:02.843892   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.843903   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:02.843910   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:02.843972   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:02.876311   80857 cri.go:89] found id: ""
	I0717 18:42:02.876338   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.876348   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:02.876356   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:02.876420   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:02.910752   80857 cri.go:89] found id: ""
	I0717 18:42:02.910776   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.910784   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:02.910789   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:02.910835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:02.947286   80857 cri.go:89] found id: ""
	I0717 18:42:02.947318   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.947328   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:02.947337   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:02.947351   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:02.999512   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:02.999542   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:03.014063   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:03.014094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:03.081822   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:03.081844   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:03.081858   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:03.161088   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:03.161117   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:04.171168   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.669508   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.324608   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:08.825084   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:04.597129   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:07.098716   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:05.699198   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:05.711597   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:05.711654   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:05.749653   80857 cri.go:89] found id: ""
	I0717 18:42:05.749684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.749694   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:05.749703   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:05.749757   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:05.785095   80857 cri.go:89] found id: ""
	I0717 18:42:05.785118   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.785125   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:05.785134   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:05.785179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:05.818085   80857 cri.go:89] found id: ""
	I0717 18:42:05.818111   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.818119   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:05.818125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:05.818171   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:05.851872   80857 cri.go:89] found id: ""
	I0717 18:42:05.851895   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.851902   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:05.851907   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:05.851958   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:05.883924   80857 cri.go:89] found id: ""
	I0717 18:42:05.883948   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.883958   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:05.883965   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:05.884025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:05.916365   80857 cri.go:89] found id: ""
	I0717 18:42:05.916396   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.916407   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:05.916414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:05.916473   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:05.950656   80857 cri.go:89] found id: ""
	I0717 18:42:05.950684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.950695   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:05.950701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:05.950762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:05.992132   80857 cri.go:89] found id: ""
	I0717 18:42:05.992160   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.992169   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:05.992177   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:05.992190   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:06.042162   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:06.042192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:06.055594   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:06.055619   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:06.123007   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:06.123038   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:06.123068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:06.200429   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:06.200460   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.739039   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:08.751520   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:08.751575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:08.783765   80857 cri.go:89] found id: ""
	I0717 18:42:08.783794   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.783805   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:08.783812   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:08.783864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:08.815200   80857 cri.go:89] found id: ""
	I0717 18:42:08.815227   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.815236   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:08.815242   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:08.815289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:08.848970   80857 cri.go:89] found id: ""
	I0717 18:42:08.849002   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.849012   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:08.849021   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:08.849084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:08.881832   80857 cri.go:89] found id: ""
	I0717 18:42:08.881859   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.881866   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:08.881874   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:08.881922   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:08.913119   80857 cri.go:89] found id: ""
	I0717 18:42:08.913142   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.913149   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:08.913155   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:08.913201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:08.947471   80857 cri.go:89] found id: ""
	I0717 18:42:08.947499   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.947509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:08.947515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:08.947570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:08.979570   80857 cri.go:89] found id: ""
	I0717 18:42:08.979599   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.979609   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:08.979615   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:08.979670   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:09.012960   80857 cri.go:89] found id: ""
	I0717 18:42:09.012991   80857 logs.go:276] 0 containers: []
	W0717 18:42:09.013002   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:09.013012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:09.013027   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:09.065732   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:09.065769   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:09.079572   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:09.079602   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:09.151737   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:09.151754   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:09.151766   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:09.230185   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:09.230218   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.670185   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:10.671336   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.325340   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:13.824087   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:09.595757   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.596784   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:14.096765   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
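
The interleaved pod_ready.go lines above come from parallel StartStop tests polling their metrics-server pods until the pod's Ready condition becomes true. A minimal client-go sketch of that kind of readiness poll (an illustration only, not the harness's pod_ready helper; the kubeconfig path and the pod name copied from the log are assumptions to adjust for a real cluster):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig at the default location; the tests use per-profile contexts instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the log above; purely illustrative.
	name, ns := "metrics-server-569cc877fc-j9qhx", "kube-system"
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"True\"\n", name, ns)
			return
		}
		// Matches the repeated "Ready":"False" lines seen in the log.
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		time.Sleep(2 * time.Second)
	}
}
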
	I0717 18:42:11.767189   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:11.780044   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:11.780115   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:11.812700   80857 cri.go:89] found id: ""
	I0717 18:42:11.812722   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.812730   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:11.812736   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:11.812781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:11.846855   80857 cri.go:89] found id: ""
	I0717 18:42:11.846883   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.846893   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:11.846900   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:11.846962   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:11.877671   80857 cri.go:89] found id: ""
	I0717 18:42:11.877700   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.877710   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:11.877716   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:11.877767   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:11.908703   80857 cri.go:89] found id: ""
	I0717 18:42:11.908728   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.908735   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:11.908740   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:11.908786   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:11.942191   80857 cri.go:89] found id: ""
	I0717 18:42:11.942218   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.942225   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:11.942231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:11.942284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:11.974751   80857 cri.go:89] found id: ""
	I0717 18:42:11.974782   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.974798   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:11.974807   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:11.974876   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:12.006287   80857 cri.go:89] found id: ""
	I0717 18:42:12.006317   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.006327   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:12.006335   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:12.006396   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:12.036524   80857 cri.go:89] found id: ""
	I0717 18:42:12.036546   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.036554   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:12.036575   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:12.036599   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:12.085073   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:12.085109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:12.098908   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:12.098937   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:12.161665   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:12.161687   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:12.161702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:12.240349   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:12.240401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:14.781101   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:14.794081   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:14.794149   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:14.828975   80857 cri.go:89] found id: ""
	I0717 18:42:14.829003   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.829013   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:14.829021   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:14.829072   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:14.864858   80857 cri.go:89] found id: ""
	I0717 18:42:14.864886   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.864896   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:14.864903   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:14.864986   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:14.897961   80857 cri.go:89] found id: ""
	I0717 18:42:14.897983   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.897991   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:14.897996   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:14.898041   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:14.935499   80857 cri.go:89] found id: ""
	I0717 18:42:14.935521   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.935529   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:14.935534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:14.935591   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:14.967581   80857 cri.go:89] found id: ""
	I0717 18:42:14.967605   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.967621   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:14.967629   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:14.967688   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:15.001844   80857 cri.go:89] found id: ""
	I0717 18:42:15.001876   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.001888   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:15.001894   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:15.001942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:15.038940   80857 cri.go:89] found id: ""
	I0717 18:42:15.038967   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.038977   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:15.038985   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:15.039043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:13.170111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.669712   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:17.669916   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.325511   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:18.823820   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.597587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:19.096905   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.072636   80857 cri.go:89] found id: ""
	I0717 18:42:15.072665   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.072677   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:15.072688   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:15.072703   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:15.124889   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:15.124934   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:15.138661   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:15.138691   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:15.208762   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:15.208791   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:15.208806   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:15.281302   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:15.281336   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:17.817136   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:17.831013   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:17.831078   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:17.867065   80857 cri.go:89] found id: ""
	I0717 18:42:17.867091   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.867101   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:17.867108   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:17.867166   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:17.904143   80857 cri.go:89] found id: ""
	I0717 18:42:17.904171   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.904180   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:17.904188   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:17.904248   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:17.937450   80857 cri.go:89] found id: ""
	I0717 18:42:17.937478   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.937487   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:17.937492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:17.937556   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:17.970650   80857 cri.go:89] found id: ""
	I0717 18:42:17.970679   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.970689   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:17.970696   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:17.970754   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:18.002329   80857 cri.go:89] found id: ""
	I0717 18:42:18.002355   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.002364   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:18.002371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:18.002430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:18.035253   80857 cri.go:89] found id: ""
	I0717 18:42:18.035278   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.035288   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:18.035295   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:18.035356   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:18.070386   80857 cri.go:89] found id: ""
	I0717 18:42:18.070419   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.070431   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:18.070439   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:18.070507   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:18.106148   80857 cri.go:89] found id: ""
	I0717 18:42:18.106170   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.106177   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:18.106185   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:18.106201   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:18.157359   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:18.157390   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:18.171757   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:18.171782   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:18.242795   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:18.242818   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:18.242831   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:18.316221   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:18.316255   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:19.670562   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.171111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.824266   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.824366   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:21.596773   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.098051   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.857953   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:20.870813   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:20.870882   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:20.906033   80857 cri.go:89] found id: ""
	I0717 18:42:20.906065   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.906075   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:20.906083   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:20.906142   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:20.942292   80857 cri.go:89] found id: ""
	I0717 18:42:20.942316   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.942335   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:20.942342   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:20.942403   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:20.985113   80857 cri.go:89] found id: ""
	I0717 18:42:20.985143   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.985151   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:20.985157   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:20.985217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:21.021807   80857 cri.go:89] found id: ""
	I0717 18:42:21.021834   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.021842   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:21.021847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:21.021906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:21.061924   80857 cri.go:89] found id: ""
	I0717 18:42:21.061949   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.061961   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:21.061969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:21.062025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:21.098890   80857 cri.go:89] found id: ""
	I0717 18:42:21.098916   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.098927   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:21.098935   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:21.098991   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:21.132576   80857 cri.go:89] found id: ""
	I0717 18:42:21.132612   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.132621   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:21.132627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:21.132687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:21.167723   80857 cri.go:89] found id: ""
	I0717 18:42:21.167765   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.167778   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:21.167788   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:21.167803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:21.220427   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:21.220461   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:21.233191   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:21.233216   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:21.304462   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:21.304481   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:21.304498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:21.386887   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:21.386925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:23.926518   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:23.940470   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:23.940534   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:23.976739   80857 cri.go:89] found id: ""
	I0717 18:42:23.976763   80857 logs.go:276] 0 containers: []
	W0717 18:42:23.976773   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:23.976778   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:23.976838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:24.007575   80857 cri.go:89] found id: ""
	I0717 18:42:24.007603   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.007612   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:24.007617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:24.007671   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:24.040430   80857 cri.go:89] found id: ""
	I0717 18:42:24.040455   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.040463   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:24.040468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:24.040581   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:24.071602   80857 cri.go:89] found id: ""
	I0717 18:42:24.071629   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.071638   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:24.071644   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:24.071705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:24.109570   80857 cri.go:89] found id: ""
	I0717 18:42:24.109595   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.109602   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:24.109607   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:24.109667   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:24.144284   80857 cri.go:89] found id: ""
	I0717 18:42:24.144305   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.144328   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:24.144333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:24.144382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:24.179441   80857 cri.go:89] found id: ""
	I0717 18:42:24.179467   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.179474   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:24.179479   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:24.179545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:24.222100   80857 cri.go:89] found id: ""
	I0717 18:42:24.222133   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.222143   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:24.222159   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:24.222175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:24.273181   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:24.273215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:24.285835   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:24.285861   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:24.357804   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:24.357826   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:24.357839   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:24.437270   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:24.437310   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:24.670033   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.671014   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:27.325296   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.597795   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.098055   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.979543   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:26.992443   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:26.992497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:27.025520   80857 cri.go:89] found id: ""
	I0717 18:42:27.025548   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.025560   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:27.025567   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:27.025630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:27.059971   80857 cri.go:89] found id: ""
	I0717 18:42:27.060002   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.060011   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:27.060016   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:27.060068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:27.091370   80857 cri.go:89] found id: ""
	I0717 18:42:27.091397   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.091407   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:27.091415   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:27.091468   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:27.123736   80857 cri.go:89] found id: ""
	I0717 18:42:27.123768   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.123779   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:27.123786   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:27.123849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:27.156155   80857 cri.go:89] found id: ""
	I0717 18:42:27.156177   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.156185   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:27.156190   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:27.156239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:27.190701   80857 cri.go:89] found id: ""
	I0717 18:42:27.190729   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.190741   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:27.190749   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:27.190825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:27.222093   80857 cri.go:89] found id: ""
	I0717 18:42:27.222119   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.222130   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:27.222137   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:27.222199   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:27.258789   80857 cri.go:89] found id: ""
	I0717 18:42:27.258813   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.258824   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:27.258834   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:27.258848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:27.307033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:27.307068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:27.321181   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:27.321209   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:27.390560   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:27.390593   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:27.390613   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:27.464352   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:27.464389   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:30.005732   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:30.019088   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:30.019160   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:29.170578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.670221   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.327610   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.824292   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.824392   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.595937   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.597622   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:30.052733   80857 cri.go:89] found id: ""
	I0717 18:42:30.052757   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.052765   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:30.052775   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:30.052836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:30.087683   80857 cri.go:89] found id: ""
	I0717 18:42:30.087711   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.087722   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:30.087729   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:30.087774   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:30.124371   80857 cri.go:89] found id: ""
	I0717 18:42:30.124404   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.124416   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:30.124432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:30.124487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:30.160081   80857 cri.go:89] found id: ""
	I0717 18:42:30.160107   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.160115   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:30.160122   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:30.160173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:30.194420   80857 cri.go:89] found id: ""
	I0717 18:42:30.194447   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.194456   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:30.194464   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:30.194522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:30.229544   80857 cri.go:89] found id: ""
	I0717 18:42:30.229570   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.229584   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:30.229591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:30.229650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:30.264164   80857 cri.go:89] found id: ""
	I0717 18:42:30.264193   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.264204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:30.264211   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:30.264266   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:30.296958   80857 cri.go:89] found id: ""
	I0717 18:42:30.296986   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.296996   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:30.297008   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:30.297049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:30.348116   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:30.348145   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:30.361373   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:30.361401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:30.429601   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:30.429620   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:30.429634   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:30.507718   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:30.507752   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:33.045539   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:33.058149   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:33.058219   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:33.088675   80857 cri.go:89] found id: ""
	I0717 18:42:33.088702   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.088710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:33.088717   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:33.088773   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:33.121269   80857 cri.go:89] found id: ""
	I0717 18:42:33.121297   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.121308   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:33.121315   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:33.121375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:33.156144   80857 cri.go:89] found id: ""
	I0717 18:42:33.156173   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.156184   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:33.156192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:33.156257   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:33.188559   80857 cri.go:89] found id: ""
	I0717 18:42:33.188585   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.188597   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:33.188603   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:33.188651   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:33.219650   80857 cri.go:89] found id: ""
	I0717 18:42:33.219672   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.219680   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:33.219686   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:33.219746   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:33.249704   80857 cri.go:89] found id: ""
	I0717 18:42:33.249728   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.249737   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:33.249742   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:33.249793   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:33.283480   80857 cri.go:89] found id: ""
	I0717 18:42:33.283503   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.283511   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:33.283516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:33.283560   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:33.314577   80857 cri.go:89] found id: ""
	I0717 18:42:33.314620   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.314629   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:33.314638   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:33.314649   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:33.363458   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:33.363491   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:33.377240   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:33.377267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:33.442939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:33.442961   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:33.442976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:33.522422   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:33.522456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:34.170638   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.171034   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.324780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.824832   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.097788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.596054   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.063823   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:36.078272   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:36.078342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:36.111460   80857 cri.go:89] found id: ""
	I0717 18:42:36.111494   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.111502   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:36.111509   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:36.111562   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:36.144191   80857 cri.go:89] found id: ""
	I0717 18:42:36.144222   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.144232   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:36.144239   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:36.144306   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:36.177247   80857 cri.go:89] found id: ""
	I0717 18:42:36.177277   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.177288   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:36.177294   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:36.177350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:36.213390   80857 cri.go:89] found id: ""
	I0717 18:42:36.213419   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.213427   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:36.213433   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:36.213493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:36.246775   80857 cri.go:89] found id: ""
	I0717 18:42:36.246799   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.246807   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:36.246812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:36.246870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:36.282441   80857 cri.go:89] found id: ""
	I0717 18:42:36.282463   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.282470   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:36.282476   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:36.282529   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:36.314178   80857 cri.go:89] found id: ""
	I0717 18:42:36.314203   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.314211   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:36.314216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:36.314265   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:36.353705   80857 cri.go:89] found id: ""
	I0717 18:42:36.353730   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.353737   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:36.353746   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:36.353758   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:36.370866   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:36.370894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:36.463660   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:36.463693   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:36.463710   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:36.540337   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:36.540371   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:36.575770   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:36.575801   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.128675   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:39.141187   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:39.141255   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:39.175960   80857 cri.go:89] found id: ""
	I0717 18:42:39.175982   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.175989   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:39.175994   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:39.176051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:39.209442   80857 cri.go:89] found id: ""
	I0717 18:42:39.209472   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.209483   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:39.209490   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:39.209552   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:39.243225   80857 cri.go:89] found id: ""
	I0717 18:42:39.243249   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.243256   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:39.243262   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:39.243309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:39.277369   80857 cri.go:89] found id: ""
	I0717 18:42:39.277396   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.277407   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:39.277414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:39.277464   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:39.310522   80857 cri.go:89] found id: ""
	I0717 18:42:39.310552   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.310563   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:39.310570   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:39.310637   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:39.344186   80857 cri.go:89] found id: ""
	I0717 18:42:39.344208   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.344216   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:39.344221   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:39.344279   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:39.375329   80857 cri.go:89] found id: ""
	I0717 18:42:39.375354   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.375366   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:39.375372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:39.375419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:39.412629   80857 cri.go:89] found id: ""
	I0717 18:42:39.412659   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.412668   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:39.412679   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:39.412696   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:39.447607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:39.447644   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.498981   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:39.499013   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:39.512380   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:39.512409   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:39.580396   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:39.580415   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:39.580428   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:38.670213   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:41.170284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.825257   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:43.324155   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.596267   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.597199   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.158145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:42.177450   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:42.177522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:42.222849   80857 cri.go:89] found id: ""
	I0717 18:42:42.222880   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.222890   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:42.222897   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:42.222954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:42.252712   80857 cri.go:89] found id: ""
	I0717 18:42:42.252742   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.252752   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:42.252757   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:42.252802   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:42.283764   80857 cri.go:89] found id: ""
	I0717 18:42:42.283789   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.283799   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:42.283806   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:42.283864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:42.317243   80857 cri.go:89] found id: ""
	I0717 18:42:42.317270   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.317281   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:42.317288   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:42.317350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:42.349972   80857 cri.go:89] found id: ""
	I0717 18:42:42.350000   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.350010   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:42.350017   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:42.350074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:42.382111   80857 cri.go:89] found id: ""
	I0717 18:42:42.382146   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.382158   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:42.382165   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:42.382223   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:42.414669   80857 cri.go:89] found id: ""
	I0717 18:42:42.414692   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.414700   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:42.414705   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:42.414765   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:42.446533   80857 cri.go:89] found id: ""
	I0717 18:42:42.446571   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.446579   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:42.446588   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:42.446603   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:42.522142   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:42.522165   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:42.522177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:42.602456   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:42.602493   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:42.642192   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:42.642221   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:42.695016   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:42.695046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:43.170955   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.670631   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.325626   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.097244   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.097783   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.208310   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:45.221821   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:45.221901   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:45.256887   80857 cri.go:89] found id: ""
	I0717 18:42:45.256914   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.256924   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:45.256930   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:45.256999   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:45.293713   80857 cri.go:89] found id: ""
	I0717 18:42:45.293735   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.293748   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:45.293753   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:45.293799   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:45.328790   80857 cri.go:89] found id: ""
	I0717 18:42:45.328815   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.328824   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:45.328833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:45.328880   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:45.364977   80857 cri.go:89] found id: ""
	I0717 18:42:45.365004   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.365014   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:45.365022   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:45.365084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:45.401131   80857 cri.go:89] found id: ""
	I0717 18:42:45.401157   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.401164   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:45.401170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:45.401217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:45.432252   80857 cri.go:89] found id: ""
	I0717 18:42:45.432279   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.432287   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:45.432293   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:45.432338   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:45.464636   80857 cri.go:89] found id: ""
	I0717 18:42:45.464659   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.464667   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:45.464674   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:45.464728   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:45.494884   80857 cri.go:89] found id: ""
	I0717 18:42:45.494913   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.494924   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:45.494935   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:45.494949   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:45.546578   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:45.546610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:45.559622   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:45.559647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:45.622094   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:45.622114   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:45.622126   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:45.699772   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:45.699814   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.241667   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:48.254205   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:48.254270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:48.293258   80857 cri.go:89] found id: ""
	I0717 18:42:48.293287   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.293298   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:48.293305   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:48.293362   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:48.328778   80857 cri.go:89] found id: ""
	I0717 18:42:48.328807   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.328818   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:48.328824   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:48.328884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:48.360230   80857 cri.go:89] found id: ""
	I0717 18:42:48.360256   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.360266   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:48.360276   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:48.360335   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:48.397770   80857 cri.go:89] found id: ""
	I0717 18:42:48.397797   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.397808   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:48.397815   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:48.397873   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:48.430912   80857 cri.go:89] found id: ""
	I0717 18:42:48.430938   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.430946   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:48.430956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:48.431015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:48.462659   80857 cri.go:89] found id: ""
	I0717 18:42:48.462688   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.462699   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:48.462706   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:48.462771   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:48.497554   80857 cri.go:89] found id: ""
	I0717 18:42:48.497584   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.497594   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:48.497601   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:48.497665   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:48.529524   80857 cri.go:89] found id: ""
	I0717 18:42:48.529547   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.529555   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:48.529564   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:48.529577   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:48.601265   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:48.601285   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:48.601297   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:48.678045   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:48.678075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.718565   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:48.718598   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:48.769923   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:48.769956   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:48.169777   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.669643   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.670334   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.324997   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.824163   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:49.596927   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.097602   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:51.282887   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:51.295778   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:51.295848   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:51.329324   80857 cri.go:89] found id: ""
	I0717 18:42:51.329351   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.329361   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:51.329369   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:51.329434   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:51.362013   80857 cri.go:89] found id: ""
	I0717 18:42:51.362042   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.362052   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:51.362059   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:51.362120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:51.395039   80857 cri.go:89] found id: ""
	I0717 18:42:51.395069   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.395080   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:51.395087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:51.395155   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:51.427683   80857 cri.go:89] found id: ""
	I0717 18:42:51.427709   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.427717   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:51.427722   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:51.427772   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:51.461683   80857 cri.go:89] found id: ""
	I0717 18:42:51.461706   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.461718   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:51.461723   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:51.461769   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:51.495780   80857 cri.go:89] found id: ""
	I0717 18:42:51.495802   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.495810   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:51.495816   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:51.495867   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:51.527541   80857 cri.go:89] found id: ""
	I0717 18:42:51.527573   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.527583   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:51.527591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:51.527648   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:51.567947   80857 cri.go:89] found id: ""
	I0717 18:42:51.567975   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.567987   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:51.567997   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:51.568014   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:51.620083   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:51.620109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:51.632823   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:51.632848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:51.705731   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:51.705753   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:51.705767   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:51.781969   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:51.782005   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.318011   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:54.331886   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:54.331942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:54.362935   80857 cri.go:89] found id: ""
	I0717 18:42:54.362962   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.362972   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:54.362979   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:54.363032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:54.396153   80857 cri.go:89] found id: ""
	I0717 18:42:54.396180   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.396191   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:54.396198   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:54.396259   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:54.433123   80857 cri.go:89] found id: ""
	I0717 18:42:54.433150   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.433160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:54.433168   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:54.433224   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:54.465034   80857 cri.go:89] found id: ""
	I0717 18:42:54.465064   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.465079   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:54.465087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:54.465200   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:54.496200   80857 cri.go:89] found id: ""
	I0717 18:42:54.496250   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.496263   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:54.496271   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:54.496332   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:54.528618   80857 cri.go:89] found id: ""
	I0717 18:42:54.528646   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.528656   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:54.528664   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:54.528724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:54.563018   80857 cri.go:89] found id: ""
	I0717 18:42:54.563042   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.563052   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:54.563059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:54.563114   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:54.595221   80857 cri.go:89] found id: ""
	I0717 18:42:54.595256   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.595266   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:54.595275   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:54.595291   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:54.608193   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:54.608220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:54.673755   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:54.673778   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:54.673793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:54.756443   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:54.756483   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.792670   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:54.792700   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:55.169224   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.169851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.824614   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.324611   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.596824   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:56.597638   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.096992   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.344637   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:57.357003   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:57.357068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:57.389230   80857 cri.go:89] found id: ""
	I0717 18:42:57.389261   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.389271   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:57.389278   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:57.389372   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:57.421529   80857 cri.go:89] found id: ""
	I0717 18:42:57.421553   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.421571   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:57.421578   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:57.421642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:57.455154   80857 cri.go:89] found id: ""
	I0717 18:42:57.455186   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.455193   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:57.455199   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:57.455245   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:57.490576   80857 cri.go:89] found id: ""
	I0717 18:42:57.490608   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.490621   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:57.490630   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:57.490693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:57.523972   80857 cri.go:89] found id: ""
	I0717 18:42:57.524010   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.524023   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:57.524033   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:57.524092   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:57.558106   80857 cri.go:89] found id: ""
	I0717 18:42:57.558132   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.558140   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:57.558145   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:57.558201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:57.591009   80857 cri.go:89] found id: ""
	I0717 18:42:57.591035   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.591045   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:57.591051   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:57.591110   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:57.624564   80857 cri.go:89] found id: ""
	I0717 18:42:57.624592   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.624601   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:57.624612   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:57.624627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:57.699833   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:57.699868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:57.737029   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:57.737066   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:57.790562   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:57.790605   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:57.804935   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:57.804984   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:57.873081   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:59.170203   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.170348   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.325020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.825020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.596885   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.597698   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:00.374166   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:00.388370   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:00.388443   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:00.421228   80857 cri.go:89] found id: ""
	I0717 18:43:00.421257   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.421268   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:00.421276   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:00.421325   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:00.451819   80857 cri.go:89] found id: ""
	I0717 18:43:00.451846   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.451856   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:00.451862   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:00.451917   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:00.482960   80857 cri.go:89] found id: ""
	I0717 18:43:00.482993   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.483004   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:00.483015   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:00.483074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:00.515860   80857 cri.go:89] found id: ""
	I0717 18:43:00.515882   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.515892   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:00.515899   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:00.515954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:00.548177   80857 cri.go:89] found id: ""
	I0717 18:43:00.548202   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.548212   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:00.548217   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:00.548275   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:00.580759   80857 cri.go:89] found id: ""
	I0717 18:43:00.580782   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.580790   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:00.580795   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:00.580847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:00.618661   80857 cri.go:89] found id: ""
	I0717 18:43:00.618683   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.618691   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:00.618699   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:00.618742   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:00.650503   80857 cri.go:89] found id: ""
	I0717 18:43:00.650528   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.650535   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:00.650544   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:00.650555   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:00.699668   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:00.699697   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:00.714086   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:00.714114   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:00.777051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:00.777087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:00.777105   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:00.859238   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:00.859274   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.399050   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:03.412565   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:03.412626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:03.445993   80857 cri.go:89] found id: ""
	I0717 18:43:03.446026   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.446038   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:03.446045   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:03.446101   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:03.481251   80857 cri.go:89] found id: ""
	I0717 18:43:03.481285   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.481297   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:03.481305   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:03.481371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:03.514406   80857 cri.go:89] found id: ""
	I0717 18:43:03.514433   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.514441   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:03.514447   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:03.514497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:03.546217   80857 cri.go:89] found id: ""
	I0717 18:43:03.546248   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.546258   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:03.546266   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:03.546327   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:03.577287   80857 cri.go:89] found id: ""
	I0717 18:43:03.577318   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.577333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:03.577340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:03.577394   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:03.610080   80857 cri.go:89] found id: ""
	I0717 18:43:03.610101   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.610109   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:03.610114   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:03.610159   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:03.643753   80857 cri.go:89] found id: ""
	I0717 18:43:03.643777   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.643787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:03.643792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:03.643849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:03.676290   80857 cri.go:89] found id: ""
	I0717 18:43:03.676338   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.676345   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:03.676353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:03.676364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:03.727818   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:03.727850   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:03.740752   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:03.740784   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:03.810465   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:03.810485   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:03.810499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:03.889326   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:03.889359   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.170473   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:05.170754   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:07.172145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.323855   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.325019   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.096213   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.096443   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.426949   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:06.440007   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:06.440079   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:06.471689   80857 cri.go:89] found id: ""
	I0717 18:43:06.471715   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.471724   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:06.471729   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:06.471775   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:06.503818   80857 cri.go:89] found id: ""
	I0717 18:43:06.503840   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.503847   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:06.503853   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:06.503900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:06.534733   80857 cri.go:89] found id: ""
	I0717 18:43:06.534755   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.534763   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:06.534768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:06.534818   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:06.565388   80857 cri.go:89] found id: ""
	I0717 18:43:06.565414   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.565421   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:06.565431   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:06.565480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:06.597739   80857 cri.go:89] found id: ""
	I0717 18:43:06.597764   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.597775   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:06.597782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:06.597847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:06.629823   80857 cri.go:89] found id: ""
	I0717 18:43:06.629845   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.629853   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:06.629859   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:06.629921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:06.663753   80857 cri.go:89] found id: ""
	I0717 18:43:06.663779   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.663787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:06.663792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:06.663838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:06.700868   80857 cri.go:89] found id: ""
	I0717 18:43:06.700896   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.700906   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:06.700917   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:06.700932   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:06.753064   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:06.753097   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:06.765845   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:06.765868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:06.834691   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:06.834715   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:06.834729   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:06.908650   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:06.908682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.450804   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:09.463369   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:09.463452   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:09.506992   80857 cri.go:89] found id: ""
	I0717 18:43:09.507020   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.507028   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:09.507035   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:09.507093   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:09.543083   80857 cri.go:89] found id: ""
	I0717 18:43:09.543108   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.543116   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:09.543121   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:09.543174   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:09.576194   80857 cri.go:89] found id: ""
	I0717 18:43:09.576219   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.576226   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:09.576231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:09.576289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:09.610148   80857 cri.go:89] found id: ""
	I0717 18:43:09.610171   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.610178   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:09.610184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:09.610258   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:09.642217   80857 cri.go:89] found id: ""
	I0717 18:43:09.642246   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.642255   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:09.642263   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:09.642342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:09.678041   80857 cri.go:89] found id: ""
	I0717 18:43:09.678064   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.678073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:09.678079   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:09.678141   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:09.711162   80857 cri.go:89] found id: ""
	I0717 18:43:09.711193   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.711204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:09.711212   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:09.711272   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:09.746135   80857 cri.go:89] found id: ""
	I0717 18:43:09.746164   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.746175   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:09.746186   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:09.746197   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:09.799268   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:09.799303   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:09.811910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:09.811935   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:09.876939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:09.876982   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:09.876998   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:09.951468   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:09.951502   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.671086   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.170273   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.823628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.824485   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.597216   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:13.096347   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.488926   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:12.501054   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:12.501112   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:12.532536   80857 cri.go:89] found id: ""
	I0717 18:43:12.532569   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.532577   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:12.532582   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:12.532629   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:12.565102   80857 cri.go:89] found id: ""
	I0717 18:43:12.565130   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.565141   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:12.565148   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:12.565208   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:12.600262   80857 cri.go:89] found id: ""
	I0717 18:43:12.600299   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.600309   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:12.600316   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:12.600366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:12.633950   80857 cri.go:89] found id: ""
	I0717 18:43:12.633980   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.633991   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:12.633998   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:12.634054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:12.673297   80857 cri.go:89] found id: ""
	I0717 18:43:12.673325   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.673338   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:12.673345   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:12.673406   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:12.707112   80857 cri.go:89] found id: ""
	I0717 18:43:12.707136   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.707144   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:12.707150   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:12.707206   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:12.746323   80857 cri.go:89] found id: ""
	I0717 18:43:12.746348   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.746358   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:12.746372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:12.746433   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:12.779470   80857 cri.go:89] found id: ""
	I0717 18:43:12.779496   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.779507   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:12.779518   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:12.779534   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:12.830156   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:12.830178   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:12.843707   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:12.843734   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:12.911849   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:12.911875   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:12.911891   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:12.986090   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:12.986122   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:14.170350   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:16.670284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:14.824727   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.324146   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.096736   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.596689   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.523428   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:15.536012   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:15.536070   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:15.569179   80857 cri.go:89] found id: ""
	I0717 18:43:15.569208   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.569218   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:15.569225   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:15.569273   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:15.606727   80857 cri.go:89] found id: ""
	I0717 18:43:15.606749   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.606757   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:15.606763   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:15.606805   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:15.638842   80857 cri.go:89] found id: ""
	I0717 18:43:15.638873   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.638883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:15.638889   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:15.638939   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:15.671418   80857 cri.go:89] found id: ""
	I0717 18:43:15.671444   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.671453   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:15.671459   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:15.671517   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:15.704892   80857 cri.go:89] found id: ""
	I0717 18:43:15.704928   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.704937   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:15.704956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:15.705013   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:15.738478   80857 cri.go:89] found id: ""
	I0717 18:43:15.738502   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.738509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:15.738515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:15.738584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:15.771188   80857 cri.go:89] found id: ""
	I0717 18:43:15.771225   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.771237   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:15.771245   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:15.771303   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:15.807737   80857 cri.go:89] found id: ""
	I0717 18:43:15.807763   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.807770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:15.807779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:15.807790   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:15.861202   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:15.861234   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:15.874170   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:15.874200   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:15.938049   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:15.938073   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:15.938086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:16.025420   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:16.025456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:18.563320   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:18.575574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:18.575634   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:18.608673   80857 cri.go:89] found id: ""
	I0717 18:43:18.608700   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.608710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:18.608718   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:18.608782   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:18.641589   80857 cri.go:89] found id: ""
	I0717 18:43:18.641611   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.641618   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:18.641624   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:18.641679   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:18.672232   80857 cri.go:89] found id: ""
	I0717 18:43:18.672258   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.672268   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:18.672274   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:18.672331   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:18.706088   80857 cri.go:89] found id: ""
	I0717 18:43:18.706111   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.706118   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:18.706134   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:18.706179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:18.742475   80857 cri.go:89] found id: ""
	I0717 18:43:18.742503   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.742512   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:18.742518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:18.742575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:18.774141   80857 cri.go:89] found id: ""
	I0717 18:43:18.774169   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.774178   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:18.774183   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:18.774234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:18.806648   80857 cri.go:89] found id: ""
	I0717 18:43:18.806672   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.806679   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:18.806685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:18.806731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:18.838022   80857 cri.go:89] found id: ""
	I0717 18:43:18.838047   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.838054   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:18.838062   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:18.838076   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:18.903467   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:18.903487   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:18.903498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:18.980385   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:18.980432   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:19.020884   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:19.020914   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:19.073530   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:19.073574   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:19.169841   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.172793   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:19.824764   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.826081   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:20.095275   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:22.097120   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.587870   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:21.602130   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:21.602185   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:21.635373   80857 cri.go:89] found id: ""
	I0717 18:43:21.635401   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.635411   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:21.635418   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:21.635480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:21.667175   80857 cri.go:89] found id: ""
	I0717 18:43:21.667200   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.667209   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:21.667216   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:21.667267   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:21.705876   80857 cri.go:89] found id: ""
	I0717 18:43:21.705907   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.705918   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:21.705926   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:21.705988   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:21.753302   80857 cri.go:89] found id: ""
	I0717 18:43:21.753323   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.753330   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:21.753337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:21.753388   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:21.785363   80857 cri.go:89] found id: ""
	I0717 18:43:21.785390   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.785396   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:21.785402   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:21.785448   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:21.817517   80857 cri.go:89] found id: ""
	I0717 18:43:21.817545   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.817553   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:21.817560   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:21.817615   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:21.849451   80857 cri.go:89] found id: ""
	I0717 18:43:21.849478   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.849489   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:21.849497   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:21.849553   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:21.880032   80857 cri.go:89] found id: ""
	I0717 18:43:21.880055   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.880063   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:21.880073   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:21.880086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:21.928498   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:21.928530   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:21.941532   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:21.941565   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:22.014044   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:22.014066   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:22.014081   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:22.090789   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:22.090817   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:24.628401   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:24.643571   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:24.643642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:24.679262   80857 cri.go:89] found id: ""
	I0717 18:43:24.679288   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.679297   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:24.679303   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:24.679360   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:24.713043   80857 cri.go:89] found id: ""
	I0717 18:43:24.713073   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.713085   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:24.713092   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:24.713145   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:24.751459   80857 cri.go:89] found id: ""
	I0717 18:43:24.751496   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.751508   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:24.751518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:24.751584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:24.790793   80857 cri.go:89] found id: ""
	I0717 18:43:24.790820   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.790831   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:24.790838   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:24.790895   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:24.822909   80857 cri.go:89] found id: ""
	I0717 18:43:24.822936   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.822945   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:24.822953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:24.823016   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:24.855369   80857 cri.go:89] found id: ""
	I0717 18:43:24.855418   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.855455   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:24.855468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:24.855557   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:24.891080   80857 cri.go:89] found id: ""
	I0717 18:43:24.891110   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.891127   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:24.891133   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:24.891187   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:24.923679   80857 cri.go:89] found id: ""
	I0717 18:43:24.923812   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.923833   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:24.923847   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:24.923863   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:24.975469   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:24.975499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:24.988671   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:24.988702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:43:23.670616   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.171013   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.323858   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.324395   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:28.325125   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.596495   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.597134   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:29.096334   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	W0717 18:43:25.055191   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:25.055210   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:25.055223   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:25.138867   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:25.138900   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:27.678822   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:27.691422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:27.691483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:27.723979   80857 cri.go:89] found id: ""
	I0717 18:43:27.724008   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.724016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:27.724022   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:27.724067   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:27.756389   80857 cri.go:89] found id: ""
	I0717 18:43:27.756415   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.756423   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:27.756429   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:27.756476   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:27.787617   80857 cri.go:89] found id: ""
	I0717 18:43:27.787644   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.787652   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:27.787658   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:27.787705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:27.821688   80857 cri.go:89] found id: ""
	I0717 18:43:27.821716   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.821725   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:27.821732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:27.821787   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:27.855353   80857 cri.go:89] found id: ""
	I0717 18:43:27.855378   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.855386   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:27.855392   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:27.855439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:27.887885   80857 cri.go:89] found id: ""
	I0717 18:43:27.887909   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.887917   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:27.887923   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:27.887984   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:27.918797   80857 cri.go:89] found id: ""
	I0717 18:43:27.918820   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.918828   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:27.918833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:27.918884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:27.951255   80857 cri.go:89] found id: ""
	I0717 18:43:27.951283   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.951295   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:27.951306   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:27.951319   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:28.025476   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:28.025506   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:28.063994   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:28.064020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:28.117762   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:28.117805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:28.135688   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:28.135725   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:28.238770   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:28.172438   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.670703   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:32.674896   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.824443   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.324216   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:31.595533   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.597968   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.739930   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:30.754147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:30.754231   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:30.794454   80857 cri.go:89] found id: ""
	I0717 18:43:30.794479   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.794486   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:30.794491   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:30.794548   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:30.831643   80857 cri.go:89] found id: ""
	I0717 18:43:30.831666   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.831673   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:30.831678   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:30.831731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:30.863293   80857 cri.go:89] found id: ""
	I0717 18:43:30.863315   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.863323   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:30.863337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:30.863395   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:30.897830   80857 cri.go:89] found id: ""
	I0717 18:43:30.897859   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.897870   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:30.897877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:30.897929   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:30.933179   80857 cri.go:89] found id: ""
	I0717 18:43:30.933209   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.933220   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:30.933227   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:30.933289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:30.964730   80857 cri.go:89] found id: ""
	I0717 18:43:30.964759   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.964773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:30.964781   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:30.964825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:30.996330   80857 cri.go:89] found id: ""
	I0717 18:43:30.996353   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.996361   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:30.996367   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:30.996419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:31.028193   80857 cri.go:89] found id: ""
	I0717 18:43:31.028220   80857 logs.go:276] 0 containers: []
	W0717 18:43:31.028228   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:31.028237   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:31.028251   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:31.040465   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:31.040490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:31.108127   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:31.108150   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:31.108164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:31.187763   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:31.187797   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:31.224238   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:31.224266   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:33.776145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:33.790045   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:33.790108   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:33.823471   80857 cri.go:89] found id: ""
	I0717 18:43:33.823495   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.823505   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:33.823512   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:33.823568   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:33.860205   80857 cri.go:89] found id: ""
	I0717 18:43:33.860233   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.860243   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:33.860250   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:33.860298   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:33.895469   80857 cri.go:89] found id: ""
	I0717 18:43:33.895499   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.895509   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:33.895516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:33.895578   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:33.938483   80857 cri.go:89] found id: ""
	I0717 18:43:33.938517   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.938527   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:33.938534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:33.938596   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:33.973265   80857 cri.go:89] found id: ""
	I0717 18:43:33.973293   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.973303   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:33.973309   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:33.973382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:34.012669   80857 cri.go:89] found id: ""
	I0717 18:43:34.012696   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.012704   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:34.012710   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:34.012760   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:34.045522   80857 cri.go:89] found id: ""
	I0717 18:43:34.045547   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.045557   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:34.045564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:34.045636   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:34.082927   80857 cri.go:89] found id: ""
	I0717 18:43:34.082957   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.082968   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:34.082979   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:34.082993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:34.134133   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:34.134168   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:34.146814   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:34.146837   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:34.217050   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:34.217079   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:34.217094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:34.298572   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:34.298610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
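
The cycle that just completed (and the ones that follow for process 80857) repeats one diagnostic pattern: check for a running kube-apiserver with pgrep, probe each expected control-plane component with "sudo crictl ps -a --quiet --name=<component>", and, when every probe comes back empty, fall back to dumping kubelet, dmesg, describe-nodes, CRI-O and container-status output. The Go sketch below only illustrates that pattern and is not minikube's implementation: it runs the commands locally instead of over SSH, and the function names are made up. The component names and fallback commands are copied from the log lines above.

// diagnose.go - a minimal sketch (assumptions noted above) of the probe-then-
// fall-back pattern visible in the log. Requires crictl/journalctl on the host.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names probed in the log, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// fallbackCmds mirrors the "Gathering logs for ..." commands in the log.
var fallbackCmds = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

// listContainers returns the container IDs crictl reports for one name.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("bash", "-c",
		"sudo crictl ps -a --quiet --name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	missing := 0
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			missing++
			continue
		}
		fmt.Printf("found %d container(s) for %q\n", len(ids), c)
	}
	// When nothing is running (as in the log above), gather raw logs instead.
	if missing == len(components) {
		for what, cmd := range fallbackCmds {
			fmt.Printf("gathering logs for %s ...\n", what)
			out, _ := exec.Command("bash", "-c", cmd).CombinedOutput()
			fmt.Println(strings.TrimSpace(string(out)))
		}
	}
}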
	I0717 18:43:35.169868   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.170083   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:35.324578   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.825006   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:36.096437   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:38.096991   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
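
Interleaved with that log gathering, three other test processes (80180, 80401, 81068) keep polling their metrics-server pods and reporting Ready=False. Below is a minimal client-go sketch of that kind of readiness poll; the function names, the fixed one-second interval and the kubeconfig loading are assumptions made for this example, not the pod_ready.go code itself. The pod name used in main comes from the log lines above.

// waitready.go - a sketch of polling a pod's Ready condition (assumptions as
// stated above). Uses only standard client-go calls.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the named pod until it is Ready or the context expires.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		if err == nil {
			// Mirrors the repeated 'has status "Ready":"False"' lines in the log.
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitForPodReady(ctx, cs, "kube-system", "metrics-server-569cc877fc-8md44"); err != nil {
		panic(err)
	}
}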
	I0717 18:43:36.838187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:36.850888   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:36.850948   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:36.883132   80857 cri.go:89] found id: ""
	I0717 18:43:36.883153   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.883160   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:36.883166   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:36.883209   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:36.918310   80857 cri.go:89] found id: ""
	I0717 18:43:36.918339   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.918348   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:36.918353   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:36.918411   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:36.949794   80857 cri.go:89] found id: ""
	I0717 18:43:36.949818   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.949825   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:36.949831   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:36.949889   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:36.980913   80857 cri.go:89] found id: ""
	I0717 18:43:36.980951   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.980962   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:36.980969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:36.981029   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:37.014295   80857 cri.go:89] found id: ""
	I0717 18:43:37.014322   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.014330   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:37.014336   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:37.014397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:37.048555   80857 cri.go:89] found id: ""
	I0717 18:43:37.048581   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.048589   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:37.048595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:37.048643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:37.080533   80857 cri.go:89] found id: ""
	I0717 18:43:37.080561   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.080571   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:37.080577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:37.080640   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:37.112919   80857 cri.go:89] found id: ""
	I0717 18:43:37.112952   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.112963   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:37.112973   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:37.112987   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:37.165012   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:37.165044   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:37.177860   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:37.177881   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:37.244776   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:37.244806   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:37.244824   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:37.322949   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:37.322976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:39.861056   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:39.884509   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:39.884592   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:39.931317   80857 cri.go:89] found id: ""
	I0717 18:43:39.931341   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.931348   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:39.931354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:39.931410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:39.971571   80857 cri.go:89] found id: ""
	I0717 18:43:39.971615   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.971626   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:39.971634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:39.971692   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:40.003851   80857 cri.go:89] found id: ""
	I0717 18:43:40.003875   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.003883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:40.003891   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:40.003942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:40.040403   80857 cri.go:89] found id: ""
	I0717 18:43:40.040430   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.040440   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:40.040445   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:40.040498   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:39.669960   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.170056   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.325792   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.824332   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.596935   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.597153   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.071893   80857 cri.go:89] found id: ""
	I0717 18:43:40.071919   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.071927   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:40.071932   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:40.071979   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:40.111020   80857 cri.go:89] found id: ""
	I0717 18:43:40.111042   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.111052   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:40.111059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:40.111117   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:40.142872   80857 cri.go:89] found id: ""
	I0717 18:43:40.142899   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.142910   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:40.142917   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:40.142975   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:40.179919   80857 cri.go:89] found id: ""
	I0717 18:43:40.179944   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.179953   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:40.179963   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:40.179980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:40.233033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:40.233075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:40.246272   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:40.246299   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:40.311988   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:40.312014   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:40.312033   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:40.395622   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:40.395658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:42.935843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:42.949893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:42.949957   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:42.982429   80857 cri.go:89] found id: ""
	I0717 18:43:42.982451   80857 logs.go:276] 0 containers: []
	W0717 18:43:42.982459   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:42.982464   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:42.982512   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:43.018637   80857 cri.go:89] found id: ""
	I0717 18:43:43.018659   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.018666   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:43.018672   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:43.018719   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:43.054274   80857 cri.go:89] found id: ""
	I0717 18:43:43.054301   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.054310   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:43.054317   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:43.054368   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:43.093382   80857 cri.go:89] found id: ""
	I0717 18:43:43.093408   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.093418   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:43.093425   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:43.093484   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:43.125830   80857 cri.go:89] found id: ""
	I0717 18:43:43.125862   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.125871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:43.125878   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:43.125936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:43.157110   80857 cri.go:89] found id: ""
	I0717 18:43:43.157138   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.157147   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:43.157154   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:43.157215   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:43.188320   80857 cri.go:89] found id: ""
	I0717 18:43:43.188342   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.188349   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:43.188354   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:43.188400   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:43.220650   80857 cri.go:89] found id: ""
	I0717 18:43:43.220679   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.220686   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:43.220695   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:43.220707   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:43.259320   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:43.259358   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:43.308308   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:43.308346   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:43.321865   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:43.321894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:43.396110   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:43.396135   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:43.396147   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:44.670206   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.169748   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.323427   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.324066   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.096564   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.105605   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.976091   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:45.988956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:45.989015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:46.022277   80857 cri.go:89] found id: ""
	I0717 18:43:46.022307   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.022318   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:46.022325   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:46.022398   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:46.057607   80857 cri.go:89] found id: ""
	I0717 18:43:46.057636   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.057646   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:46.057653   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:46.057712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:46.089275   80857 cri.go:89] found id: ""
	I0717 18:43:46.089304   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.089313   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:46.089321   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:46.089378   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:46.123686   80857 cri.go:89] found id: ""
	I0717 18:43:46.123717   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.123726   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:46.123731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:46.123784   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:46.166600   80857 cri.go:89] found id: ""
	I0717 18:43:46.166628   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.166638   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:46.166645   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:46.166704   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:46.202518   80857 cri.go:89] found id: ""
	I0717 18:43:46.202543   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.202562   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:46.202568   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:46.202612   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:46.234573   80857 cri.go:89] found id: ""
	I0717 18:43:46.234608   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.234620   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:46.234627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:46.234687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:46.265305   80857 cri.go:89] found id: ""
	I0717 18:43:46.265333   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.265343   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:46.265355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:46.265369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:46.342963   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:46.342993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:46.377170   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:46.377208   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:46.429641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:46.429673   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:46.442168   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:46.442195   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:46.516656   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.016877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:49.030308   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:49.030375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:49.062400   80857 cri.go:89] found id: ""
	I0717 18:43:49.062423   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.062430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:49.062435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:49.062486   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:49.097110   80857 cri.go:89] found id: ""
	I0717 18:43:49.097131   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.097137   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:49.097142   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:49.097190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:49.128535   80857 cri.go:89] found id: ""
	I0717 18:43:49.128558   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.128571   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:49.128577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:49.128626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:49.162505   80857 cri.go:89] found id: ""
	I0717 18:43:49.162530   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.162538   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:49.162544   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:49.162594   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:49.194912   80857 cri.go:89] found id: ""
	I0717 18:43:49.194939   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.194950   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:49.194957   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:49.195025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:49.227055   80857 cri.go:89] found id: ""
	I0717 18:43:49.227083   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.227092   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:49.227098   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:49.227147   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:49.259568   80857 cri.go:89] found id: ""
	I0717 18:43:49.259596   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.259607   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:49.259618   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:49.259673   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:49.291700   80857 cri.go:89] found id: ""
	I0717 18:43:49.291727   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.291735   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:49.291744   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:49.291755   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:49.344600   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:49.344636   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:49.357680   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:49.357705   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:49.427160   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.427180   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:49.427192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:49.504151   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:49.504182   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:49.170632   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.170953   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.324205   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.823181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:53.824989   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.596298   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.596383   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:54.097260   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:52.041591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:52.054775   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:52.054841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:52.085858   80857 cri.go:89] found id: ""
	I0717 18:43:52.085892   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.085904   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:52.085911   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:52.085961   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:52.124100   80857 cri.go:89] found id: ""
	I0717 18:43:52.124122   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.124130   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:52.124135   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:52.124195   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:52.155056   80857 cri.go:89] found id: ""
	I0717 18:43:52.155079   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.155087   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:52.155093   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:52.155154   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:52.189318   80857 cri.go:89] found id: ""
	I0717 18:43:52.189349   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.189359   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:52.189366   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:52.189430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:52.222960   80857 cri.go:89] found id: ""
	I0717 18:43:52.222988   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.222999   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:52.223006   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:52.223071   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:52.255807   80857 cri.go:89] found id: ""
	I0717 18:43:52.255834   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.255841   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:52.255847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:52.255904   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:52.286596   80857 cri.go:89] found id: ""
	I0717 18:43:52.286628   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.286641   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:52.286648   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:52.286703   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:52.319607   80857 cri.go:89] found id: ""
	I0717 18:43:52.319632   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.319641   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:52.319652   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:52.319666   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:52.371270   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:52.371301   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:52.384771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:52.384803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:52.456408   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:52.456432   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:52.456444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:52.533724   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:52.533759   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:53.171080   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.669642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.324311   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.823693   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.595916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.597526   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.072554   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:55.087005   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:55.087086   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:55.123300   80857 cri.go:89] found id: ""
	I0717 18:43:55.123325   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.123331   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:55.123336   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:55.123390   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:55.158476   80857 cri.go:89] found id: ""
	I0717 18:43:55.158502   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.158509   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:55.158515   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:55.158572   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:55.198489   80857 cri.go:89] found id: ""
	I0717 18:43:55.198511   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.198518   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:55.198524   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:55.198567   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:55.230901   80857 cri.go:89] found id: ""
	I0717 18:43:55.230933   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.230943   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:55.230951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:55.231028   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:55.262303   80857 cri.go:89] found id: ""
	I0717 18:43:55.262326   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.262333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:55.262340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:55.262393   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:55.293889   80857 cri.go:89] found id: ""
	I0717 18:43:55.293916   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.293925   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:55.293930   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:55.293983   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:55.325695   80857 cri.go:89] found id: ""
	I0717 18:43:55.325720   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.325727   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:55.325737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:55.325797   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:55.360021   80857 cri.go:89] found id: ""
	I0717 18:43:55.360044   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.360052   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:55.360059   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:55.360075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:55.372088   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:55.372111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:55.442073   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:55.442101   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:55.442116   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:55.521733   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:55.521763   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:55.558914   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:55.558947   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.114001   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:58.126283   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:58.126353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:58.162769   80857 cri.go:89] found id: ""
	I0717 18:43:58.162800   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.162810   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:58.162815   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:58.162862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:58.197359   80857 cri.go:89] found id: ""
	I0717 18:43:58.197386   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.197397   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:58.197404   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:58.197465   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:58.229662   80857 cri.go:89] found id: ""
	I0717 18:43:58.229691   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.229700   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:58.229707   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:58.229766   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:58.261810   80857 cri.go:89] found id: ""
	I0717 18:43:58.261832   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.261838   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:58.261844   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:58.261900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:58.293243   80857 cri.go:89] found id: ""
	I0717 18:43:58.293271   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.293282   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:58.293290   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:58.293353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:58.325689   80857 cri.go:89] found id: ""
	I0717 18:43:58.325714   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.325724   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:58.325731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:58.325785   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:58.357381   80857 cri.go:89] found id: ""
	I0717 18:43:58.357406   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.357416   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:58.357422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:58.357483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:58.389859   80857 cri.go:89] found id: ""
	I0717 18:43:58.389888   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.389900   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:58.389910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:58.389926   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:58.458034   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:58.458058   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:58.458072   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:58.536134   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:58.536164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:58.573808   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:58.573834   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.624956   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:58.624985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:58.170810   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.670184   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.671370   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.824682   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.824874   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:01.096294   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:03.096348   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:01.138486   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:01.151547   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:01.151610   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:01.186397   80857 cri.go:89] found id: ""
	I0717 18:44:01.186422   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.186430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:01.186435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:01.186487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:01.220797   80857 cri.go:89] found id: ""
	I0717 18:44:01.220822   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.220830   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:01.220849   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:01.220894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:01.257640   80857 cri.go:89] found id: ""
	I0717 18:44:01.257666   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.257674   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:01.257680   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:01.257727   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:01.295393   80857 cri.go:89] found id: ""
	I0717 18:44:01.295418   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.295425   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:01.295432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:01.295493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:01.327242   80857 cri.go:89] found id: ""
	I0717 18:44:01.327261   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.327268   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:01.327273   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:01.327319   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:01.358559   80857 cri.go:89] found id: ""
	I0717 18:44:01.358586   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.358593   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:01.358599   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:01.358647   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:01.392301   80857 cri.go:89] found id: ""
	I0717 18:44:01.392332   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.392341   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:01.392346   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:01.392407   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:01.424422   80857 cri.go:89] found id: ""
	I0717 18:44:01.424449   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.424457   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:01.424465   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:01.424477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:01.473298   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:01.473332   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:01.487444   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:01.487471   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:01.552548   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:01.552572   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:01.552586   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:01.634203   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:01.634242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:04.175618   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:04.188071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:04.188150   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:04.222149   80857 cri.go:89] found id: ""
	I0717 18:44:04.222173   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.222180   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:04.222185   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:04.222242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:04.257174   80857 cri.go:89] found id: ""
	I0717 18:44:04.257211   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.257223   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:04.257232   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:04.257284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:04.291628   80857 cri.go:89] found id: ""
	I0717 18:44:04.291653   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.291666   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:04.291673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:04.291733   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:04.325935   80857 cri.go:89] found id: ""
	I0717 18:44:04.325964   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.325975   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:04.325982   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:04.326043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:04.356610   80857 cri.go:89] found id: ""
	I0717 18:44:04.356638   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.356648   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:04.356655   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:04.356712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:04.387728   80857 cri.go:89] found id: ""
	I0717 18:44:04.387764   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.387773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:04.387782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:04.387840   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:04.421452   80857 cri.go:89] found id: ""
	I0717 18:44:04.421479   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.421488   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:04.421495   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:04.421555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:04.453111   80857 cri.go:89] found id: ""
	I0717 18:44:04.453139   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.453150   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:04.453161   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:04.453175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:04.506185   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:04.506215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:04.523611   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:04.523638   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:04.591051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:04.591074   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:04.591091   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:04.666603   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:04.666647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:05.169836   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.170112   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.324886   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.325488   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.096545   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.598131   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.205208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:07.218182   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:07.218236   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:07.254521   80857 cri.go:89] found id: ""
	I0717 18:44:07.254554   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.254565   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:07.254571   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:07.254638   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:07.293622   80857 cri.go:89] found id: ""
	I0717 18:44:07.293650   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.293658   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:07.293663   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:07.293711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:07.331056   80857 cri.go:89] found id: ""
	I0717 18:44:07.331083   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.331091   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:07.331097   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:07.331157   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:07.368445   80857 cri.go:89] found id: ""
	I0717 18:44:07.368476   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.368484   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:07.368491   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:07.368541   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:07.405507   80857 cri.go:89] found id: ""
	I0717 18:44:07.405539   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.405550   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:07.405557   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:07.405617   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:07.444752   80857 cri.go:89] found id: ""
	I0717 18:44:07.444782   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.444792   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:07.444801   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:07.444859   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:07.486976   80857 cri.go:89] found id: ""
	I0717 18:44:07.487006   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.487016   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:07.487024   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:07.487073   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:07.522561   80857 cri.go:89] found id: ""
	I0717 18:44:07.522590   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.522599   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:07.522607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:07.522618   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:07.576350   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:07.576382   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:07.591491   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:07.591517   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:07.659860   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:07.659886   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:07.659902   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:07.743445   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:07.743478   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:09.170601   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.170851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:09.824120   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.826838   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.097009   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:12.596778   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.284468   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:10.296549   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:10.296608   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:10.331209   80857 cri.go:89] found id: ""
	I0717 18:44:10.331236   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.331246   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:10.331252   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:10.331297   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:10.363911   80857 cri.go:89] found id: ""
	I0717 18:44:10.363941   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.363949   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:10.363954   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:10.364001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:10.395935   80857 cri.go:89] found id: ""
	I0717 18:44:10.395960   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.395970   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:10.395977   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:10.396021   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:10.428307   80857 cri.go:89] found id: ""
	I0717 18:44:10.428337   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.428344   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:10.428351   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:10.428397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:10.459615   80857 cri.go:89] found id: ""
	I0717 18:44:10.459643   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.459654   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:10.459661   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:10.459715   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:10.491593   80857 cri.go:89] found id: ""
	I0717 18:44:10.491617   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.491628   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:10.491636   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:10.491693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:10.526822   80857 cri.go:89] found id: ""
	I0717 18:44:10.526846   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.526853   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:10.526858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:10.526918   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:10.561037   80857 cri.go:89] found id: ""
	I0717 18:44:10.561066   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.561077   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:10.561087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:10.561101   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:10.643333   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:10.643364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:10.684673   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:10.684704   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:10.736191   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:10.736220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:10.748762   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:10.748793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:10.812121   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.313033   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:13.325692   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:13.325756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:13.358306   80857 cri.go:89] found id: ""
	I0717 18:44:13.358336   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.358345   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:13.358352   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:13.358410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:13.393233   80857 cri.go:89] found id: ""
	I0717 18:44:13.393264   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.393274   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:13.393282   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:13.393340   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:13.424256   80857 cri.go:89] found id: ""
	I0717 18:44:13.424287   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.424298   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:13.424305   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:13.424358   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:13.454988   80857 cri.go:89] found id: ""
	I0717 18:44:13.455010   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.455018   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:13.455023   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:13.455069   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:13.491019   80857 cri.go:89] found id: ""
	I0717 18:44:13.491046   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.491054   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:13.491060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:13.491107   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:13.523045   80857 cri.go:89] found id: ""
	I0717 18:44:13.523070   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.523079   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:13.523085   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:13.523131   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:13.555442   80857 cri.go:89] found id: ""
	I0717 18:44:13.555470   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.555483   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:13.555489   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:13.555549   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:13.588891   80857 cri.go:89] found id: ""
	I0717 18:44:13.588921   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.588931   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:13.588958   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:13.588973   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:13.663635   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.663659   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:13.663674   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:13.749098   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:13.749135   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:13.785489   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:13.785524   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:13.837098   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:13.837128   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:13.671215   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.671282   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.671466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:14.324573   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.826063   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.095967   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.096403   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.096478   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.350571   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:16.364398   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:16.364470   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:16.400677   80857 cri.go:89] found id: ""
	I0717 18:44:16.400708   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.400719   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:16.400726   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:16.400781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:16.431715   80857 cri.go:89] found id: ""
	I0717 18:44:16.431743   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.431754   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:16.431760   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:16.431836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:16.465115   80857 cri.go:89] found id: ""
	I0717 18:44:16.465148   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.465160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:16.465167   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:16.465230   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:16.497906   80857 cri.go:89] found id: ""
	I0717 18:44:16.497933   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.497944   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:16.497952   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:16.498008   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:16.534066   80857 cri.go:89] found id: ""
	I0717 18:44:16.534097   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.534108   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:16.534116   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:16.534173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:16.566679   80857 cri.go:89] found id: ""
	I0717 18:44:16.566706   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.566717   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:16.566724   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:16.566781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:16.598397   80857 cri.go:89] found id: ""
	I0717 18:44:16.598416   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.598422   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:16.598427   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:16.598480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:16.629943   80857 cri.go:89] found id: ""
	I0717 18:44:16.629975   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.629998   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:16.630017   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:16.630032   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:16.706452   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:16.706489   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:16.744971   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:16.745003   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:16.796450   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:16.796477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:16.809192   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:16.809217   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:16.875699   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.376821   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:19.389921   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:19.389980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:19.423837   80857 cri.go:89] found id: ""
	I0717 18:44:19.423862   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.423870   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:19.423877   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:19.423934   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:19.468267   80857 cri.go:89] found id: ""
	I0717 18:44:19.468293   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.468305   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:19.468311   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:19.468371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:19.503286   80857 cri.go:89] found id: ""
	I0717 18:44:19.503315   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.503326   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:19.503333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:19.503391   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:19.535505   80857 cri.go:89] found id: ""
	I0717 18:44:19.535531   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.535542   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:19.535548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:19.535607   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:19.568678   80857 cri.go:89] found id: ""
	I0717 18:44:19.568704   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.568711   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:19.568717   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:19.568762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:19.604027   80857 cri.go:89] found id: ""
	I0717 18:44:19.604053   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.604064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:19.604071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:19.604127   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:19.637357   80857 cri.go:89] found id: ""
	I0717 18:44:19.637387   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.637397   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:19.637403   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:19.637450   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:19.669094   80857 cri.go:89] found id: ""
	I0717 18:44:19.669126   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.669136   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:19.669145   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:19.669160   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:19.720218   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:19.720248   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:19.733320   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:19.733343   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:19.796229   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.796252   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:19.796267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:19.871157   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:19.871186   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:20.170824   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.670239   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.324037   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.324408   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.824030   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.098734   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.595859   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.409012   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:22.421477   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:22.421546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:22.457314   80857 cri.go:89] found id: ""
	I0717 18:44:22.457337   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.457346   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:22.457354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:22.457410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:22.490998   80857 cri.go:89] found id: ""
	I0717 18:44:22.491022   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.491030   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:22.491037   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:22.491090   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:22.523904   80857 cri.go:89] found id: ""
	I0717 18:44:22.523934   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.523945   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:22.523953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:22.524012   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:22.555917   80857 cri.go:89] found id: ""
	I0717 18:44:22.555947   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.555956   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:22.555962   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:22.556026   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:22.588510   80857 cri.go:89] found id: ""
	I0717 18:44:22.588552   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.588565   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:22.588574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:22.588652   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:22.621854   80857 cri.go:89] found id: ""
	I0717 18:44:22.621883   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.621893   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:22.621901   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:22.621956   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:22.653897   80857 cri.go:89] found id: ""
	I0717 18:44:22.653921   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.653931   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:22.653938   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:22.654001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:22.685731   80857 cri.go:89] found id: ""
	I0717 18:44:22.685760   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.685770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:22.685779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:22.685792   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:22.735514   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:22.735545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:22.748148   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:22.748169   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:22.809637   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:22.809666   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:22.809682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:22.886014   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:22.886050   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:24.670825   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:27.169930   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.824694   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.324620   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.597423   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.095788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.431906   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:25.444866   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:25.444965   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:25.477211   80857 cri.go:89] found id: ""
	I0717 18:44:25.477245   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.477257   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:25.477264   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:25.477366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:25.512077   80857 cri.go:89] found id: ""
	I0717 18:44:25.512108   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.512120   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:25.512127   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:25.512177   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:25.543953   80857 cri.go:89] found id: ""
	I0717 18:44:25.543974   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.543981   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:25.543987   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:25.544032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:25.574955   80857 cri.go:89] found id: ""
	I0717 18:44:25.574980   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.574990   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:25.574997   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:25.575054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:25.607078   80857 cri.go:89] found id: ""
	I0717 18:44:25.607106   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.607117   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:25.607125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:25.607188   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:25.643129   80857 cri.go:89] found id: ""
	I0717 18:44:25.643152   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.643162   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:25.643169   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:25.643225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:25.678220   80857 cri.go:89] found id: ""
	I0717 18:44:25.678241   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.678249   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:25.678254   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:25.678309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:25.715405   80857 cri.go:89] found id: ""
	I0717 18:44:25.715433   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.715446   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:25.715458   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:25.715474   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:25.772978   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:25.773008   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:25.786559   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:25.786587   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:25.853369   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:25.853386   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:25.853398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:25.954346   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:25.954398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:28.498591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:28.511701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:28.511762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:28.543527   80857 cri.go:89] found id: ""
	I0717 18:44:28.543551   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.543559   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:28.543565   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:28.543624   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:28.574737   80857 cri.go:89] found id: ""
	I0717 18:44:28.574762   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.574769   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:28.574776   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:28.574835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:28.608129   80857 cri.go:89] found id: ""
	I0717 18:44:28.608166   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.608174   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:28.608179   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:28.608234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:28.644324   80857 cri.go:89] found id: ""
	I0717 18:44:28.644348   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.644357   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:28.644371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:28.644426   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:28.675830   80857 cri.go:89] found id: ""
	I0717 18:44:28.675859   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.675870   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:28.675877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:28.675937   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:28.705713   80857 cri.go:89] found id: ""
	I0717 18:44:28.705749   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.705760   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:28.705768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:28.705821   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:28.738648   80857 cri.go:89] found id: ""
	I0717 18:44:28.738677   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.738688   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:28.738695   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:28.738752   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:28.768877   80857 cri.go:89] found id: ""
	I0717 18:44:28.768906   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.768916   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:28.768927   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:28.768953   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:28.818951   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:28.818985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:28.832813   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:28.832843   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:28.910030   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:28.910051   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:28.910063   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:28.986706   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:28.986743   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:29.170559   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.669543   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.824906   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:33.324261   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.096916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:32.597522   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.529154   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:31.543261   80857 kubeadm.go:597] duration metric: took 4m4.346231712s to restartPrimaryControlPlane
	W0717 18:44:31.543327   80857 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:31.543350   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:44:33.670602   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.169669   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.325082   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.824371   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.096445   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.097375   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:39.098005   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.752008   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.208633612s)
	I0717 18:44:36.752076   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:44:36.765411   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:44:36.774556   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:44:36.783406   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:44:36.783427   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:44:36.783479   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:44:36.791953   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:44:36.792007   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:44:36.800929   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:44:36.808988   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:44:36.809049   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:44:36.817312   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.825586   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:44:36.825648   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.834783   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:44:36.843109   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:44:36.843166   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
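The four grep/rm pairs above are the stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is checked for the expected control-plane endpoint and deleted when the check fails (here every grep exits with status 2 simply because the files do not exist yet). A minimal Go sketch of that per-file check, using a hypothetical helper name rather than minikube's internals:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// removeIfStale mirrors the "grep <endpoint> <file>, else rm -f <file>" pattern in the log:
// if the kubeconfig does not reference the expected control-plane endpoint (or does not
// exist at all), it is removed so kubeadm can regenerate it.
func removeIfStale(endpoint, path string) error {
	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
		return exec.Command("sudo", "rm", "-f", path).Run()
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(endpoint, f); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}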
	I0717 18:44:36.852276   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:44:37.058251   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:44:38.170695   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.671193   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.324181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.818959   80401 pod_ready.go:81] duration metric: took 4m0.000961975s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	E0717 18:44:40.818998   80401 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:44:40.819017   80401 pod_ready.go:38] duration metric: took 4m12.045669741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:44:40.819042   80401 kubeadm.go:597] duration metric: took 4m22.276381575s to restartPrimaryControlPlane
	W0717 18:44:40.819091   80401 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:40.819116   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:44:41.597013   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:44.097096   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:43.170145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:45.670626   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:46.595570   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.598459   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.169822   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:50.170686   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:52.670255   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:51.097591   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:53.597467   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:55.170853   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:57.670157   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:56.096506   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:58.107493   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.170210   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.672286   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.596747   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.590517   81068 pod_ready.go:81] duration metric: took 4m0.000120095s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:02.590549   81068 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:02.590572   81068 pod_ready.go:38] duration metric: took 4m10.536894511s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:02.590607   81068 kubeadm.go:597] duration metric: took 4m18.045314131s to restartPrimaryControlPlane
	W0717 18:45:02.590672   81068 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:02.590702   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:06.920900   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.10175503s)
	I0717 18:45:06.921009   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:06.952090   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:06.962820   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:06.979545   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:06.979577   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:06.979641   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:06.990493   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:06.990574   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:07.014934   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:07.024381   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:07.024449   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:07.033573   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.042495   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:07.042552   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.051233   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:07.059616   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:07.059674   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:07.068348   80401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:07.112042   80401 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 18:45:07.112188   80401 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:07.229262   80401 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:07.229356   80401 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:07.229491   80401 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 18:45:07.239251   80401 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:05.171753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.669753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.241949   80401 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:07.242054   80401 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:07.242150   80401 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:07.242253   80401 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:07.242355   80401 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:07.242459   80401 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:07.242536   80401 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:07.242620   80401 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:07.242721   80401 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:07.242835   80401 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:07.242937   80401 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:07.242998   80401 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:07.243068   80401 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:07.641462   80401 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:07.705768   80401 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:07.821102   80401 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:07.898702   80401 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:08.107470   80401 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:08.107945   80401 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:08.111615   80401 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:08.113464   80401 out.go:204]   - Booting up control plane ...
	I0717 18:45:08.113572   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:08.113695   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:08.113843   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:08.131411   80401 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:08.137563   80401 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:08.137622   80401 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:08.268403   80401 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:08.268519   80401 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:08.769158   80401 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.386396ms
	I0717 18:45:08.769265   80401 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:45:09.669968   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:11.670466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:13.771873   80401 kubeadm.go:310] [api-check] The API server is healthy after 5.002458706s
	I0717 18:45:13.789581   80401 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:13.804268   80401 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:13.831438   80401 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:13.831641   80401 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-066175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:13.845165   80401 kubeadm.go:310] [bootstrap-token] Using token: fscs12.0o2n9pl0vxdw75m1
	I0717 18:45:13.846851   80401 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:13.847002   80401 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:13.854788   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:13.866828   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:13.871541   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:13.875508   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:13.880068   80401 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:14.179824   80401 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:14.669946   80401 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:15.180053   80401 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:15.180076   80401 kubeadm.go:310] 
	I0717 18:45:15.180180   80401 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:15.180201   80401 kubeadm.go:310] 
	I0717 18:45:15.180287   80401 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:15.180295   80401 kubeadm.go:310] 
	I0717 18:45:15.180348   80401 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:15.180437   80401 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:15.180517   80401 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:15.180530   80401 kubeadm.go:310] 
	I0717 18:45:15.180607   80401 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:15.180617   80401 kubeadm.go:310] 
	I0717 18:45:15.180682   80401 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:15.180692   80401 kubeadm.go:310] 
	I0717 18:45:15.180775   80401 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:15.180871   80401 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:15.180984   80401 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:15.180996   80401 kubeadm.go:310] 
	I0717 18:45:15.181107   80401 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:15.181221   80401 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:15.181234   80401 kubeadm.go:310] 
	I0717 18:45:15.181370   80401 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181523   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:15.181571   80401 kubeadm.go:310] 	--control-plane 
	I0717 18:45:15.181579   80401 kubeadm.go:310] 
	I0717 18:45:15.181679   80401 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:15.181690   80401 kubeadm.go:310] 
	I0717 18:45:15.181802   80401 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181954   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:15.182460   80401 kubeadm.go:310] W0717 18:45:07.084606    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.182848   80401 kubeadm.go:310] W0717 18:45:07.085710    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.183017   80401 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:15.183038   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:45:15.183048   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:15.185022   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:13.671267   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.671682   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.186444   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:15.197514   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
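The 496-byte 1-k8s.conflist copied above is a bridge CNI configuration; its exact contents are not reproduced in the log. The sketch below only illustrates the general shape of such a conflist, generated from Go to stay consistent with the other examples; every field value is a placeholder, not the file minikube actually writes:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative bridge CNI config; values are placeholders, not minikube's real file.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out)) // a file of this shape would land in /etc/cni/net.d/1-k8s.conflist
}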
	I0717 18:45:15.216000   80401 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:15.216097   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.216157   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-066175 minikube.k8s.io/updated_at=2024_07_17T18_45_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=no-preload-066175 minikube.k8s.io/primary=true
	I0717 18:45:15.251049   80401 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:15.383234   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.884265   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.384075   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.883375   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.383864   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.884072   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.383283   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.883644   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.384366   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.507413   80401 kubeadm.go:1113] duration metric: took 4.291369352s to wait for elevateKubeSystemPrivileges
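The repeated "kubectl get sa default" calls above (roughly every 500ms) are the wait that elevateKubeSystemPrivileges performs: the default service account must exist in the fresh cluster, alongside the minikube-rbac clusterrolebinding created at 18:45:15.216. A rough sketch of that poll-until-present loop, with hypothetical function and variable names, not minikube's own code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until the service account exists
// or the deadline passes, mirroring the ~500ms retry cadence visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account is available
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for default service account")
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println(err)
}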
	I0717 18:45:19.507450   80401 kubeadm.go:394] duration metric: took 5m1.019320853s to StartCluster
	I0717 18:45:19.507473   80401 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.507570   80401 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:19.510004   80401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.510329   80401 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:19.510401   80401 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:19.510484   80401 addons.go:69] Setting storage-provisioner=true in profile "no-preload-066175"
	I0717 18:45:19.510515   80401 addons.go:234] Setting addon storage-provisioner=true in "no-preload-066175"
	W0717 18:45:19.510523   80401 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:19.510530   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:45:19.510531   80401 addons.go:69] Setting default-storageclass=true in profile "no-preload-066175"
	I0717 18:45:19.510553   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510551   80401 addons.go:69] Setting metrics-server=true in profile "no-preload-066175"
	I0717 18:45:19.510572   80401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-066175"
	I0717 18:45:19.510586   80401 addons.go:234] Setting addon metrics-server=true in "no-preload-066175"
	W0717 18:45:19.510596   80401 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:19.510628   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511027   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511047   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511075   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511102   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.512057   80401 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:19.513662   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:19.532038   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40719
	I0717 18:45:19.532059   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0717 18:45:19.532048   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0717 18:45:19.532557   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532562   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532701   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.533086   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533107   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533246   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533261   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533276   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533295   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533455   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533671   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533732   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533851   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.533933   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.533958   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.534280   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.534310   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.537749   80401 addons.go:234] Setting addon default-storageclass=true in "no-preload-066175"
	W0717 18:45:19.537773   80401 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:19.537804   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.538168   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.538206   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.550488   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I0717 18:45:19.551013   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.551625   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.551647   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.552005   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.552335   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.553613   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0717 18:45:19.553633   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0717 18:45:19.554184   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554243   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554271   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.554784   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554801   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.554965   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554986   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.555220   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555350   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555393   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.555995   80401 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:19.556103   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.556229   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.556825   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.557482   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:19.557499   80401 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:19.557517   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.558437   80401 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:19.560069   80401 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.560084   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:19.560100   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.560881   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.560908   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.560932   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.561265   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.561477   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.561633   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.561732   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.563601   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564025   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.564197   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.564219   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564378   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.564549   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.564686   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.579324   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37271
	I0717 18:45:19.579786   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.580331   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.580354   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.580697   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.580925   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.582700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.582910   80401 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.582923   80401 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:19.582936   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.585938   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586387   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.586414   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586605   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.586758   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.586920   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.587061   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.706369   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:19.727936   80401 node_ready.go:35] waiting up to 6m0s for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738822   80401 node_ready.go:49] node "no-preload-066175" has status "Ready":"True"
	I0717 18:45:19.738841   80401 node_ready.go:38] duration metric: took 10.872501ms for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738852   80401 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:19.744979   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:19.854180   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.873723   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:19.873746   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:19.883867   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.902041   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:19.902064   80401 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:19.926788   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:19.926867   80401 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:19.953788   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:20.571091   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571119   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571119   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571137   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571394   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.571439   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.571456   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571463   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571459   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572575   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571494   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572789   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572761   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572804   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572815   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572824   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.573027   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.573044   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589595   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.589614   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.589913   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.589940   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589918   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.789754   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.789776   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790082   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790103   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790113   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.790123   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790416   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790457   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790470   80401 addons.go:475] Verifying addon metrics-server=true in "no-preload-066175"
	I0717 18:45:20.790416   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.792175   80401 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:45:18.169876   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:20.170261   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:22.664656   80180 pod_ready.go:81] duration metric: took 4m0.000669682s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:22.664696   80180 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:22.664716   80180 pod_ready.go:38] duration metric: took 4m9.027997903s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:22.664746   80180 kubeadm.go:597] duration metric: took 4m19.955287366s to restartPrimaryControlPlane
	W0717 18:45:22.664823   80180 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:22.664854   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:20.793543   80401 addons.go:510] duration metric: took 1.283145408s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 18:45:21.766367   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.252243   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.771415   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:24.771443   80401 pod_ready.go:81] duration metric: took 5.026437249s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:24.771457   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:26.777371   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:28.778629   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.277550   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.792126   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.792154   80401 pod_ready.go:81] duration metric: took 7.020687724s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.792168   80401 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798687   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.798708   80401 pod_ready.go:81] duration metric: took 6.534344ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798717   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803428   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.803452   80401 pod_ready.go:81] duration metric: took 4.727536ms for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803464   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815053   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.815078   80401 pod_ready.go:81] duration metric: took 11.60679ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815092   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824126   80401 pod_ready.go:92] pod "kube-proxy-rgp5c" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.824151   80401 pod_ready.go:81] duration metric: took 9.050394ms for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824163   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176378   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:32.176404   80401 pod_ready.go:81] duration metric: took 352.232802ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176414   80401 pod_ready.go:38] duration metric: took 12.437548785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
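Every pod_ready.go line in this section follows the same pattern: fetch the pod, test its Ready condition, and retry until it is True or the timeout expires (6m0s per pod here, 4m0s for the metrics-server pods that time out above). A rough client-go sketch of that kind of readiness poll follows; it is illustrative only, not minikube's implementation, and reuses names taken from the log (the kubeconfig path and a coredns pod name):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-r9xns", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}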
	I0717 18:45:32.176430   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:32.176492   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:32.190918   80401 api_server.go:72] duration metric: took 12.680546008s to wait for apiserver process to appear ...
	I0717 18:45:32.190942   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:32.190963   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:45:32.196011   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:45:32.197004   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:45:32.197024   80401 api_server.go:131] duration metric: took 6.075734ms to wait for apiserver health ...
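The healthz wait above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" is treated as healthy, after which the control-plane version is read. A minimal sketch of such a probe; for brevity it skips certificate verification, which the real check does not:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: the real check trusts the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.72.216:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"
}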
	I0717 18:45:32.197033   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:32.379383   80401 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:32.379412   80401 system_pods.go:61] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.379416   80401 system_pods.go:61] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.379420   80401 system_pods.go:61] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.379423   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.379427   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.379431   80401 system_pods.go:61] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.379433   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.379439   80401 system_pods.go:61] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.379442   80401 system_pods.go:61] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.379450   80401 system_pods.go:74] duration metric: took 182.412193ms to wait for pod list to return data ...
	I0717 18:45:32.379456   80401 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:32.576324   80401 default_sa.go:45] found service account: "default"
	I0717 18:45:32.576348   80401 default_sa.go:55] duration metric: took 196.886306ms for default service account to be created ...
	I0717 18:45:32.576357   80401 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:32.780237   80401 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:32.780266   80401 system_pods.go:89] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.780272   80401 system_pods.go:89] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.780276   80401 system_pods.go:89] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.780280   80401 system_pods.go:89] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.780284   80401 system_pods.go:89] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.780288   80401 system_pods.go:89] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.780291   80401 system_pods.go:89] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.780298   80401 system_pods.go:89] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.780302   80401 system_pods.go:89] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.780314   80401 system_pods.go:126] duration metric: took 203.948509ms to wait for k8s-apps to be running ...
	I0717 18:45:32.780323   80401 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:32.780368   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:32.796763   80401 system_svc.go:56] duration metric: took 16.430293ms WaitForService to wait for kubelet
	I0717 18:45:32.796791   80401 kubeadm.go:582] duration metric: took 13.286425468s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:32.796809   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:32.977271   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:32.977295   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:32.977305   80401 node_conditions.go:105] duration metric: took 180.491938ms to run NodePressure ...
	I0717 18:45:32.977315   80401 start.go:241] waiting for startup goroutines ...
	I0717 18:45:32.977322   80401 start.go:246] waiting for cluster config update ...
	I0717 18:45:32.977331   80401 start.go:255] writing updated cluster config ...
	I0717 18:45:32.977544   80401 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:33.022678   80401 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 18:45:33.024737   80401 out.go:177] * Done! kubectl is now configured to use "no-preload-066175" cluster and "default" namespace by default
	I0717 18:45:33.625503   81068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.034773328s)
	I0717 18:45:33.625584   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:33.640151   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:33.650198   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:33.659027   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:33.659048   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:33.659088   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:45:33.667607   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:33.667663   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:33.677632   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:45:33.685631   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:33.685683   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:33.694068   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.702840   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:33.702894   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.711560   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:45:33.719883   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:33.719928   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
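The sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint (https://control-plane.minikube.internal:8444 for this profile) and removed when the check fails, which in this run simply means the files do not exist yet and kubeadm will regenerate them. A minimal shell sketch of the same idea, assuming the same paths and port:

    # Hedged sketch of the stale-kubeconfig check performed above.
    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin kubelet controller-manager scheduler; do
        # If the file is missing or points at a different endpoint, remove it
        # so that 'kubeadm init' can write a fresh copy.
        sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
            || sudo rm -f "/etc/kubernetes/$f.conf"
    done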
	I0717 18:45:33.729898   81068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:33.781672   81068 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:45:33.781776   81068 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:33.908046   81068 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:33.908199   81068 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:33.908366   81068 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:45:34.103926   81068 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:34.105872   81068 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:34.105979   81068 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:34.106063   81068 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:34.106183   81068 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:34.106425   81068 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:34.106542   81068 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:34.106624   81068 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:34.106729   81068 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:34.106827   81068 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:34.106901   81068 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:34.106984   81068 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:34.107046   81068 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:34.107142   81068 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:34.390326   81068 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:34.442610   81068 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:34.692719   81068 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:34.777644   81068 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:35.101349   81068 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:35.102039   81068 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:35.104892   81068 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:35.106561   81068 out.go:204]   - Booting up control plane ...
	I0717 18:45:35.106689   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:35.106775   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:35.107611   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:35.126132   81068 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:35.127180   81068 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:35.127245   81068 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:35.250173   81068 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:35.250284   81068 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:35.752731   81068 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.583425ms
	I0717 18:45:35.752861   81068 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:45:40.754304   81068 kubeadm.go:310] [api-check] The API server is healthy after 5.001385597s
	I0717 18:45:40.766072   81068 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:40.785708   81068 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:40.816360   81068 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:40.816576   81068 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-022930 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:40.830588   81068 kubeadm.go:310] [bootstrap-token] Using token: kxmxsp.4wnt2q9oqhdfdirj
	I0717 18:45:40.831905   81068 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:40.832031   81068 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:40.840754   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:40.850104   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:40.853748   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:40.857341   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:40.860783   81068 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:41.161978   81068 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:41.600410   81068 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:42.161763   81068 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:42.163450   81068 kubeadm.go:310] 
	I0717 18:45:42.163541   81068 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:42.163558   81068 kubeadm.go:310] 
	I0717 18:45:42.163661   81068 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:42.163673   81068 kubeadm.go:310] 
	I0717 18:45:42.163707   81068 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:42.163797   81068 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:42.163870   81068 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:42.163881   81068 kubeadm.go:310] 
	I0717 18:45:42.163974   81068 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:42.163990   81068 kubeadm.go:310] 
	I0717 18:45:42.164058   81068 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:42.164077   81068 kubeadm.go:310] 
	I0717 18:45:42.164151   81068 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:42.164256   81068 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:42.164367   81068 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:42.164377   81068 kubeadm.go:310] 
	I0717 18:45:42.164489   81068 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:42.164588   81068 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:42.164595   81068 kubeadm.go:310] 
	I0717 18:45:42.164683   81068 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.164826   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:42.164862   81068 kubeadm.go:310] 	--control-plane 
	I0717 18:45:42.164870   81068 kubeadm.go:310] 
	I0717 18:45:42.165002   81068 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:42.165012   81068 kubeadm.go:310] 
	I0717 18:45:42.165143   81068 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.165257   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:42.166381   81068 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
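The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA's public key. It is not recomputed anywhere in this run, but if it ever needs to be regenerated on the control-plane node, the standard openssl pipeline from the kubeadm documentation reproduces it; the CA path below follows the certificateDir logged earlier (/var/lib/minikube/certs), which is an assumption about where this profile keeps its CA:

    # Hedged sketch: recompute the discovery-token CA cert hash by hand.
    openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'
    # A fresh join command (new token plus this hash) can also be printed with:
    #   kubeadm token create --print-join-command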
	I0717 18:45:42.166436   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:45:42.166456   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:42.168387   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:42.169678   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:42.180065   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
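The 496-byte /etc/cni/net.d/1-k8s.conflist pushed to the node here is not reproduced in the log. The snippet below is only an illustration of what a bridge-type CNI conflist of this shape typically contains; the plugin list, pod subnet and option values are assumptions for illustration, not values read from this run:

    # Hedged illustration only - the real file contents are not shown in the log.
    # A bridge-type conflist at /etc/cni/net.d/1-k8s.conflist typically looks like:
    #
    #   {
    #     "cniVersion": "0.3.1",
    #     "name": "bridge",
    #     "plugins": [
    #       { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
    #         "ipMasq": true,
    #         "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #       { "type": "portmap", "capabilities": { "portMappings": true } }
    #     ]
    #   }
    #
    # The actual contents on this node can be inspected with:
    minikube -p default-k8s-diff-port-022930 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"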
	I0717 18:45:42.197116   81068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:42.197192   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.197217   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-022930 minikube.k8s.io/updated_at=2024_07_17T18_45_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=default-k8s-diff-port-022930 minikube.k8s.io/primary=true
	I0717 18:45:42.216456   81068 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:42.370148   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.870732   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.370980   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.871201   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.370616   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.370377   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.870614   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.370555   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.870513   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.370594   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.870651   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.370620   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.870863   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.371058   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.870188   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.370949   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.871187   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.370764   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.370298   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.870917   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.371193   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.870491   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.370274   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.871160   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.370879   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.870592   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.948131   81068 kubeadm.go:1113] duration metric: took 13.751000929s to wait for elevateKubeSystemPrivileges
	I0717 18:45:55.948166   81068 kubeadm.go:394] duration metric: took 5m11.453950834s to StartCluster
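The burst of "kubectl get sa default" calls above is minikube polling roughly twice a second until the default service account exists in the default namespace, the signal that kube-controller-manager has finished bootstrapping; the elevateKubeSystemPrivileges duration just logged is the total time spent in that loop (plus the RBAC binding created before it). A bare-bones shell equivalent of the wait, assuming the same binary and kubeconfig paths:

    # Hedged sketch of the service-account wait loop seen above.
    until sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5   # the log shows roughly two attempts per second
    done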
	I0717 18:45:55.948188   81068 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.948265   81068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:55.950777   81068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.951066   81068 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:55.951134   81068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:55.951202   81068 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951237   81068 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951247   81068 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:55.951243   81068 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951257   81068 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951293   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:45:55.951300   81068 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951318   81068 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:55.951319   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951348   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951292   81068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-022930"
	I0717 18:45:55.951712   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951732   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951769   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951747   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.952885   81068 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:55.954423   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:55.968158   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0717 18:45:55.968547   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
	I0717 18:45:55.968768   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.968917   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.969414   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969436   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969548   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969566   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969814   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970012   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970235   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.970413   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.970462   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.970809   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44281
	I0717 18:45:55.971165   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.974130   81068 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.974155   81068 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:55.974184   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.974549   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.974578   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.981608   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.981640   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.982054   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.982711   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.982754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.990665   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0717 18:45:55.991297   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.991922   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.991938   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.992213   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.992346   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.993952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:55.996135   81068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:55.997555   81068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:55.997579   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:55.997602   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:55.998414   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0717 18:45:55.998963   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.999540   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.999554   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.000799   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0717 18:45:56.001014   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001096   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.001419   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.001512   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.001527   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001755   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.001929   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.002102   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.002141   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:56.002178   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:56.002255   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.002686   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.002709   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.003047   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.003251   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.004660   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.006355   81068 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:56.007646   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:56.007663   81068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:56.007678   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.010711   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.011220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011452   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.011637   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.011806   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.011932   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.021277   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0717 18:45:56.021980   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.022568   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.022585   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.022949   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.023127   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.025023   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.025443   81068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.025458   81068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:56.025476   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.028095   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.028477   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028666   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.028853   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.029081   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.029226   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.173482   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:56.194585   81068 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203594   81068 node_ready.go:49] node "default-k8s-diff-port-022930" has status "Ready":"True"
	I0717 18:45:56.203614   81068 node_ready.go:38] duration metric: took 8.994875ms for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203623   81068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:56.207834   81068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212424   81068 pod_ready.go:92] pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.212444   81068 pod_ready.go:81] duration metric: took 4.58857ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212454   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217013   81068 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.217031   81068 pod_ready.go:81] duration metric: took 4.569971ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217040   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221441   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.221458   81068 pod_ready.go:81] duration metric: took 4.411121ms for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221470   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.268740   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:56.268765   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:56.290194   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.310957   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:56.310981   81068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:56.352789   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:56.352821   81068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:56.378402   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:56.379632   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:56.518737   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.518766   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519075   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519097   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.519108   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.519117   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519352   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519383   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519426   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.529290   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.529317   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.529618   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.529680   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.529697   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386401   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007961919s)
	I0717 18:45:57.386463   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.386480   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386925   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.386980   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386999   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.387017   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386958   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.387283   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.387304   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731240   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351571451s)
	I0717 18:45:57.731287   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731616   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.731650   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731664   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731672   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731685   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731905   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731930   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731949   81068 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-022930"
	I0717 18:45:57.731960   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.734601   81068 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 18:45:53.693038   80180 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.028164403s)
	I0717 18:45:53.693099   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:53.709020   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:53.718790   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:53.728384   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:53.728405   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:53.728444   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:53.737315   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:53.737384   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:53.746336   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:53.754297   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:53.754347   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:53.763252   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.772186   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:53.772229   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.780829   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:53.788899   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:53.788955   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:53.797324   80180 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:53.982580   80180 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:57.735769   81068 addons.go:510] duration metric: took 1.784634456s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 18:45:57.742312   81068 pod_ready.go:92] pod "kube-proxy-hnb5v" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.742333   81068 pod_ready.go:81] duration metric: took 1.520854667s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.742344   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809858   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.809885   81068 pod_ready.go:81] duration metric: took 67.527182ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809896   81068 pod_ready.go:38] duration metric: took 1.606263576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:57.809914   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:57.809972   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:57.847337   81068 api_server.go:72] duration metric: took 1.896234247s to wait for apiserver process to appear ...
	I0717 18:45:57.847366   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:57.847391   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:45:57.853537   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:45:57.856587   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:45:57.856661   81068 api_server.go:131] duration metric: took 9.286402ms to wait for apiserver health ...
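The health probe above targets the apiserver on this profile's non-default port 8444. On a default RBAC setup the /healthz, /readyz and /livez endpoints are readable without client credentials, so the same check can be reproduced by hand from the host; -k only skips certificate verification for a quick probe and is not what minikube itself does:

    # Hedged sketch: manual equivalent of the healthz probe logged above.
    curl -k https://192.168.50.245:8444/healthz              # expected body: ok
    curl -k "https://192.168.50.245:8444/readyz?verbose"     # per-check breakdown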
	I0717 18:45:57.856684   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:58.002336   81068 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:58.002374   81068 system_pods.go:61] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002383   81068 system_pods.go:61] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002396   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.002402   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.002408   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.002414   81068 system_pods.go:61] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.002418   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.002425   81068 system_pods.go:61] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.002435   81068 system_pods.go:61] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.002452   81068 system_pods.go:74] duration metric: took 145.752129ms to wait for pod list to return data ...
	I0717 18:45:58.002463   81068 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:58.197223   81068 default_sa.go:45] found service account: "default"
	I0717 18:45:58.197250   81068 default_sa.go:55] duration metric: took 194.774408ms for default service account to be created ...
	I0717 18:45:58.197260   81068 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:58.401825   81068 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:58.401878   81068 system_pods.go:89] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401891   81068 system_pods.go:89] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401904   81068 system_pods.go:89] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.401917   81068 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.401927   81068 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.401935   81068 system_pods.go:89] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.401940   81068 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.401948   81068 system_pods.go:89] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.401956   81068 system_pods.go:89] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.401965   81068 system_pods.go:126] duration metric: took 204.700297ms to wait for k8s-apps to be running ...
	I0717 18:45:58.401975   81068 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:58.402024   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:58.416020   81068 system_svc.go:56] duration metric: took 14.023536ms WaitForService to wait for kubelet
	I0717 18:45:58.416056   81068 kubeadm.go:582] duration metric: took 2.464957357s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:58.416079   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:58.598829   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:58.598863   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:58.598876   81068 node_conditions.go:105] duration metric: took 182.791383ms to run NodePressure ...
	I0717 18:45:58.598891   81068 start.go:241] waiting for startup goroutines ...
	I0717 18:45:58.598899   81068 start.go:246] waiting for cluster config update ...
	I0717 18:45:58.598912   81068 start.go:255] writing updated cluster config ...
	I0717 18:45:58.599267   81068 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:58.661380   81068 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:45:58.663085   81068 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-022930" cluster and "default" namespace by default
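default-k8s-diff-port-022930 finishes start-up with the default-storageclass, storage-provisioner and metrics-server addons enabled, and with metrics-server and storage-provisioner still Pending at hand-off. A hedged way to spot-check those addons afterwards with plain kubectl; the context name is assumed to equal the profile name, and the k8s-app label is the one minikube's metrics-server manifests normally carry:

    # Hedged sketch: inspect the addons this profile just enabled.
    kubectl --context default-k8s-diff-port-022930 -n kube-system get deploy metrics-server
    kubectl --context default-k8s-diff-port-022930 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context default-k8s-diff-port-022930 get storageclass    # minikube's default is "standard"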
	I0717 18:46:02.558673   80180 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:46:02.558766   80180 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:02.558842   80180 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:02.558980   80180 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:02.559118   80180 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:02.559210   80180 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:02.561934   80180 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:02.562036   80180 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:02.562108   80180 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:02.562191   80180 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:02.562290   80180 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:02.562393   80180 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:02.562478   80180 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:02.562565   80180 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:02.562643   80180 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:02.562711   80180 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:02.562826   80180 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:02.562886   80180 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:02.562958   80180 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:02.563005   80180 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:02.563081   80180 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:46:02.563136   80180 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:02.563210   80180 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:02.563293   80180 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:02.563405   80180 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:02.563468   80180 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:02.564989   80180 out.go:204]   - Booting up control plane ...
	I0717 18:46:02.565092   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:02.565181   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:02.565270   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:02.565400   80180 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:02.565526   80180 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:02.565597   80180 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:02.565783   80180 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:46:02.565880   80180 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:46:02.565959   80180 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.323304ms
	I0717 18:46:02.566046   80180 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:46:02.566105   80180 kubeadm.go:310] [api-check] The API server is healthy after 5.002038309s
	I0717 18:46:02.566206   80180 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:46:02.566307   80180 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:46:02.566359   80180 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:46:02.566525   80180 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-527415 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:46:02.566575   80180 kubeadm.go:310] [bootstrap-token] Using token: xeax16.7z40teb0jswemrgg
	I0717 18:46:02.568038   80180 out.go:204]   - Configuring RBAC rules ...
	I0717 18:46:02.568120   80180 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:46:02.568194   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:46:02.568314   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:46:02.568449   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:46:02.568553   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:46:02.568660   80180 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:46:02.568807   80180 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:46:02.568877   80180 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:46:02.568926   80180 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:46:02.568936   80180 kubeadm.go:310] 
	I0717 18:46:02.569032   80180 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:46:02.569044   80180 kubeadm.go:310] 
	I0717 18:46:02.569108   80180 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:46:02.569114   80180 kubeadm.go:310] 
	I0717 18:46:02.569157   80180 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:46:02.569249   80180 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:46:02.569326   80180 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:46:02.569346   80180 kubeadm.go:310] 
	I0717 18:46:02.569432   80180 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:46:02.569442   80180 kubeadm.go:310] 
	I0717 18:46:02.569511   80180 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:46:02.569519   80180 kubeadm.go:310] 
	I0717 18:46:02.569599   80180 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:46:02.569695   80180 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:46:02.569790   80180 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:46:02.569797   80180 kubeadm.go:310] 
	I0717 18:46:02.569905   80180 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:46:02.569985   80180 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:46:02.569998   80180 kubeadm.go:310] 
	I0717 18:46:02.570096   80180 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570234   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:46:02.570264   80180 kubeadm.go:310] 	--control-plane 
	I0717 18:46:02.570273   80180 kubeadm.go:310] 
	I0717 18:46:02.570348   80180 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:46:02.570355   80180 kubeadm.go:310] 
	I0717 18:46:02.570429   80180 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570555   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:46:02.570569   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:46:02.570578   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:46:02.571934   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:46:02.573034   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:46:02.583253   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
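	(For context on the 496-byte file copied above: it is minikube's bridge CNI conflist. A minimal sketch of a file of the same general shape is shown below; the cniVersion, bridge name, and pod subnet are illustrative assumptions, not a byte-for-byte copy of what minikube generates.)

	    # Illustrative only: field values are assumptions, not the exact file minikube writes.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF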
	I0717 18:46:02.603658   80180 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-527415 minikube.k8s.io/updated_at=2024_07_17T18_46_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=embed-certs-527415 minikube.k8s.io/primary=true
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:02.621414   80180 ops.go:34] apiserver oom_adj: -16
	I0717 18:46:02.792226   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.292632   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.792270   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.293220   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.793011   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.292596   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.793043   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.293286   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.793069   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.292569   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.792604   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.293028   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.792259   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.292273   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.792672   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.293080   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.792442   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.292894   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.792436   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.292411   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.792327   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.292909   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.792878   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.293188   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.793038   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.292453   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.792367   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.898487   80180 kubeadm.go:1113] duration metric: took 13.294815165s to wait for elevateKubeSystemPrivileges
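	(The burst of "kubectl get sa default" retries above is minikube waiting for the default ServiceAccount to exist before it grants kube-system elevated RBAC. A rough shell equivalent of that wait is sketched below; the binary path and kubeconfig are taken from the log, while the retry cap is an assumption.)

	    # Poll roughly every 500ms, as the log above does, until the default
	    # ServiceAccount exists; the ~2 minute cap here is an assumed limit.
	    for i in $(seq 1 240); do
	      if sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default \
	           --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; then
	        break
	      fi
	      sleep 0.5
	    done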
	I0717 18:46:15.898528   80180 kubeadm.go:394] duration metric: took 5m13.234208822s to StartCluster
	I0717 18:46:15.898546   80180 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.898626   80180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:46:15.900239   80180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.900462   80180 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:46:15.900564   80180 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:46:15.900648   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:46:15.900655   80180 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-527415"
	I0717 18:46:15.900667   80180 addons.go:69] Setting default-storageclass=true in profile "embed-certs-527415"
	I0717 18:46:15.900691   80180 addons.go:69] Setting metrics-server=true in profile "embed-certs-527415"
	I0717 18:46:15.900704   80180 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-527415"
	I0717 18:46:15.900709   80180 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-527415"
	I0717 18:46:15.900714   80180 addons.go:234] Setting addon metrics-server=true in "embed-certs-527415"
	W0717 18:46:15.900747   80180 addons.go:243] addon metrics-server should already be in state true
	I0717 18:46:15.900777   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	W0717 18:46:15.900715   80180 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:46:15.900852   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.901106   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901150   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901152   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901183   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901264   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901298   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.902177   80180 out.go:177] * Verifying Kubernetes components...
	I0717 18:46:15.903698   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:46:15.918294   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0717 18:46:15.918295   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0717 18:46:15.918859   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.918909   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919433   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919455   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919478   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I0717 18:46:15.919548   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919572   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919788   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.919875   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919883   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920316   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920323   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.920338   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.920345   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920387   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920425   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920695   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920890   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.924623   80180 addons.go:234] Setting addon default-storageclass=true in "embed-certs-527415"
	W0717 18:46:15.924644   80180 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:46:15.924672   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.925801   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.925830   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.936020   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0717 18:46:15.936280   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0717 18:46:15.936365   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.936674   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.937144   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937164   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937229   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937239   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937565   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937587   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937770   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.937872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.939671   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.939856   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.941929   80180 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:46:15.941934   80180 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:46:15.943632   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:46:15.943650   80180 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:46:15.943668   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.943715   80180 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:15.943724   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:46:15.943737   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.946283   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0717 18:46:15.946815   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.947230   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.947240   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.947272   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.947953   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.947987   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948001   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.948179   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.948223   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948248   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.948388   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.948604   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.948627   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.948653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948832   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.948870   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.948895   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.949086   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.949307   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.949454   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.969385   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0717 18:46:15.969789   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.970221   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.970241   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.970756   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.970963   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.972631   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.972849   80180 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:15.972868   80180 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:46:15.972889   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.975680   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976123   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.976187   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976320   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.976496   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.976657   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.976748   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:16.134605   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:46:16.206139   80180 node_ready.go:35] waiting up to 6m0s for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214532   80180 node_ready.go:49] node "embed-certs-527415" has status "Ready":"True"
	I0717 18:46:16.214550   80180 node_ready.go:38] duration metric: took 8.382109ms for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214568   80180 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:16.223573   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:16.254146   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:46:16.254166   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:46:16.293257   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:16.312304   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:16.334927   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:46:16.334949   80180 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:46:16.404696   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:16.404723   80180 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:46:16.462835   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281088   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281157   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281395   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281402   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281424   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281427   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281432   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281436   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281676   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281678   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281700   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281705   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281722   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281732   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.300264   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.300294   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.300592   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.300643   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.300672   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.489477   80180 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026593042s)
	I0717 18:46:17.489520   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.489534   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490020   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.490047   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490055   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490068   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.490077   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490344   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490373   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490384   80180 addons.go:475] Verifying addon metrics-server=true in "embed-certs-527415"
	I0717 18:46:17.490397   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.492257   80180 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:46:17.493487   80180 addons.go:510] duration metric: took 1.592928152s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 18:46:18.230569   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.230592   80180 pod_ready.go:81] duration metric: took 2.006995421s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.230603   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235298   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.235317   80180 pod_ready.go:81] duration metric: took 4.707534ms for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235327   80180 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.238998   80180 pod_ready.go:92] pod "etcd-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.239015   80180 pod_ready.go:81] duration metric: took 3.681191ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.239023   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242949   80180 pod_ready.go:92] pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.242967   80180 pod_ready.go:81] duration metric: took 3.937614ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242977   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246567   80180 pod_ready.go:92] pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.246580   80180 pod_ready.go:81] duration metric: took 3.597434ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246588   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628607   80180 pod_ready.go:92] pod "kube-proxy-m52fq" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.628636   80180 pod_ready.go:81] duration metric: took 382.042151ms for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628650   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028536   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:19.028558   80180 pod_ready.go:81] duration metric: took 399.900565ms for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028565   80180 pod_ready.go:38] duration metric: took 2.813989212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:19.028578   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:46:19.028630   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:46:19.044787   80180 api_server.go:72] duration metric: took 3.144295616s to wait for apiserver process to appear ...
	I0717 18:46:19.044810   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:46:19.044825   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:46:19.051106   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:46:19.052094   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:46:19.052111   80180 api_server.go:131] duration metric: took 7.296406ms to wait for apiserver health ...
	I0717 18:46:19.052117   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:46:19.231877   80180 system_pods.go:59] 9 kube-system pods found
	I0717 18:46:19.231905   80180 system_pods.go:61] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.231912   80180 system_pods.go:61] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.231916   80180 system_pods.go:61] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.231921   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.231925   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.231929   80180 system_pods.go:61] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.231934   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.231942   80180 system_pods.go:61] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.231947   80180 system_pods.go:61] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.231957   80180 system_pods.go:74] duration metric: took 179.833729ms to wait for pod list to return data ...
	I0717 18:46:19.231966   80180 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:46:19.427972   80180 default_sa.go:45] found service account: "default"
	I0717 18:46:19.427994   80180 default_sa.go:55] duration metric: took 196.021611ms for default service account to be created ...
	I0717 18:46:19.428002   80180 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:46:19.630730   80180 system_pods.go:86] 9 kube-system pods found
	I0717 18:46:19.630755   80180 system_pods.go:89] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.630760   80180 system_pods.go:89] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.630765   80180 system_pods.go:89] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.630769   80180 system_pods.go:89] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.630774   80180 system_pods.go:89] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.630778   80180 system_pods.go:89] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.630782   80180 system_pods.go:89] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.630788   80180 system_pods.go:89] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.630792   80180 system_pods.go:89] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.630800   80180 system_pods.go:126] duration metric: took 202.793522ms to wait for k8s-apps to be running ...
	I0717 18:46:19.630806   80180 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:46:19.630849   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:19.646111   80180 system_svc.go:56] duration metric: took 15.296964ms WaitForService to wait for kubelet
	I0717 18:46:19.646133   80180 kubeadm.go:582] duration metric: took 3.745647205s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:46:19.646149   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:46:19.828333   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:46:19.828356   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:46:19.828368   80180 node_conditions.go:105] duration metric: took 182.213813ms to run NodePressure ...
	I0717 18:46:19.828381   80180 start.go:241] waiting for startup goroutines ...
	I0717 18:46:19.828389   80180 start.go:246] waiting for cluster config update ...
	I0717 18:46:19.828401   80180 start.go:255] writing updated cluster config ...
	I0717 18:46:19.828690   80180 ssh_runner.go:195] Run: rm -f paused
	I0717 18:46:19.877774   80180 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:46:19.879769   80180 out.go:177] * Done! kubectl is now configured to use "embed-certs-527415" cluster and "default" namespace by default
	I0717 18:46:33.124646   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:46:33.124790   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:46:33.126245   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.126307   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.126409   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.126547   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.126673   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:33.126734   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:33.128541   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:33.128626   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:33.128707   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:33.128817   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:33.128901   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:33.129018   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:33.129091   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:33.129172   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:33.129249   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:33.129339   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:33.129408   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:33.129444   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:33.129532   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:33.129603   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:33.129665   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:33.129765   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:33.129812   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:33.129929   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:33.130037   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:33.130093   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:33.130177   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:33.131546   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:33.131652   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:33.131750   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:33.131858   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:33.131939   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:33.132085   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:46:33.132133   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:46:33.132189   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132355   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132419   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132585   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132657   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132839   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132900   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133143   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133248   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133452   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133460   80857 kubeadm.go:310] 
	I0717 18:46:33.133494   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:46:33.133529   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:46:33.133535   80857 kubeadm.go:310] 
	I0717 18:46:33.133564   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:46:33.133599   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:46:33.133727   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:46:33.133752   80857 kubeadm.go:310] 
	I0717 18:46:33.133905   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:46:33.133947   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:46:33.134002   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:46:33.134012   80857 kubeadm.go:310] 
	I0717 18:46:33.134116   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:46:33.134186   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:46:33.134193   80857 kubeadm.go:310] 
	I0717 18:46:33.134290   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:46:33.134367   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:46:33.134431   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:46:33.134491   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:46:33.134533   80857 kubeadm.go:310] 
	W0717 18:46:33.134615   80857 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 18:46:33.134669   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:46:33.590879   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:33.605393   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:46:33.614382   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:46:33.614405   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:46:33.614450   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:46:33.622849   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:46:33.622905   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:46:33.631852   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:46:33.640160   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:46:33.640211   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:46:33.648774   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.656740   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:46:33.656796   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.665799   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:46:33.674492   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:46:33.674547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:46:33.683627   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:46:33.746405   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.746472   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.881152   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.881297   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.881443   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:34.053199   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:34.055757   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:34.055843   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:34.055918   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:34.056030   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:34.056129   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:34.056232   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:34.056336   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:34.056431   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:34.056524   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:34.056656   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:34.056764   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:34.056824   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:34.056900   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:34.276456   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:34.491418   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:34.702265   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:34.874511   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:34.895484   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:34.896451   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:34.896536   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:35.040208   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:35.042291   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:35.042437   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:35.042565   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:35.044391   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:35.046206   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:35.050843   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:47:15.053070   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:47:15.053416   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:15.053586   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:20.053963   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:20.054207   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:30.054801   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:30.055011   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:50.055270   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:50.055465   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.053919   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:48:30.054133   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.054148   80857 kubeadm.go:310] 
	I0717 18:48:30.054231   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:48:30.054300   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:48:30.054326   80857 kubeadm.go:310] 
	I0717 18:48:30.054386   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:48:30.054443   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:48:30.054581   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:48:30.054593   80857 kubeadm.go:310] 
	I0717 18:48:30.054715   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:48:30.054761   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:48:30.054810   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:48:30.054818   80857 kubeadm.go:310] 
	I0717 18:48:30.054970   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:48:30.055069   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:48:30.055081   80857 kubeadm.go:310] 
	I0717 18:48:30.055236   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:48:30.055332   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:48:30.055396   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:48:30.055457   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:48:30.055483   80857 kubeadm.go:310] 
	I0717 18:48:30.056139   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:48:30.056246   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:48:30.056338   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:48:30.056413   80857 kubeadm.go:394] duration metric: took 8m2.908780359s to StartCluster
	I0717 18:48:30.056461   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:48:30.056524   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:48:30.102640   80857 cri.go:89] found id: ""
	I0717 18:48:30.102662   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.102669   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:48:30.102674   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:48:30.102724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:48:30.142516   80857 cri.go:89] found id: ""
	I0717 18:48:30.142548   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.142559   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:48:30.142567   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:48:30.142630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:48:30.178558   80857 cri.go:89] found id: ""
	I0717 18:48:30.178589   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.178598   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:48:30.178604   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:48:30.178677   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:48:30.211146   80857 cri.go:89] found id: ""
	I0717 18:48:30.211177   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.211186   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:48:30.211192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:48:30.211242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:48:30.244287   80857 cri.go:89] found id: ""
	I0717 18:48:30.244308   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.244314   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:48:30.244319   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:48:30.244364   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:48:30.274547   80857 cri.go:89] found id: ""
	I0717 18:48:30.274577   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.274587   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:48:30.274594   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:48:30.274660   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:48:30.306796   80857 cri.go:89] found id: ""
	I0717 18:48:30.306825   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.306835   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:48:30.306842   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:48:30.306903   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:48:30.341938   80857 cri.go:89] found id: ""
	I0717 18:48:30.341962   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.341972   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:48:30.341982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:48:30.341997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:48:30.407881   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:48:30.407925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:48:30.430885   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:48:30.430913   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:48:30.525366   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:48:30.525394   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:48:30.525408   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:48:30.639556   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:48:30.639588   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 18:48:30.677493   80857 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 18:48:30.677544   80857 out.go:239] * 
	W0717 18:48:30.677604   80857 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.677636   80857 out.go:239] * 
	W0717 18:48:30.678483   80857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:48:30.681792   80857 out.go:177] 
	W0717 18:48:30.682976   80857 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.683034   80857 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 18:48:30.683050   80857 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 18:48:30.684325   80857 out.go:177] 
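	Note on the suggestion printed above: the K8S_KUBELET_NOT_RUNNING exit points at a kubelet/cgroup-driver mismatch and recommends passing --extra-config=kubelet.cgroup-driver=systemd. A minimal sketch of how that hint could be applied when reproducing this failure by hand is shown below; the profile name is a placeholder and whether systemd is the correct cgroup driver for this guest image is an assumption, not something this report confirms.

	# Hypothetical reproduction sketch: restart the profile with the cgroup driver the log suggests.
	out/minikube-linux-amd64 start -p <profile> --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If the control plane still fails to come up, inspect the kubelet on the node,
	# as the kubeadm output above already recommends (journalctl -xeu kubelet).
	out/minikube-linux-amd64 -p <profile> ssh "sudo journalctl -xeu kubelet | tail -n 100"

	Both commands only restate the remediation already present in the log; they are not part of the recorded test run.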
	
	
	==> CRI-O <==
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.641369907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242500641347836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eca3eaff-0a0e-45ed-ae27-e3d0bf8ad57d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.641885589Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34491e69-c80f-4374-b3ac-bc379637a93b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.641934615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34491e69-c80f-4374-b3ac-bc379637a93b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.642146981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:218a44cd8585fbb83856c49696567afd594b4da967ac5ce50a0f632e2a6138cf,PodSandboxId:fb12b6b348e3e8568d69a1524584087652bbf96f2a5c845f8fda2ab30e641139,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241957906796398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9b11611-2008-4a15-a661-62809bd1d4c3,},Annotations:map[string]string{io.kubernetes.container.hash: a189e809,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5119186a70a760fe0c9b05022c775aaabe1a15791e247d7e841827098d306094,PodSandboxId:7d313062ed4075c4bf53961edb6b650038f88792ea8bcc9f3937e4a98ba438b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957428999810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn64r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cbef26-555a-4693-afac-c739d9238a04,},Annotations:map[string]string{io.kubernetes.container.hash: 415218df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53dbf27ee711bde074b1abeee9bda1c0d830a983bf1acba2b6c8dfce83506a1,PodSandboxId:3407801315db4c603819b6fbd1e8c488045e35c148565ec53bbe65a53f31e252,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957235378366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fp4tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: dc66092c-9183-4630-93cc-6ec4aa59a928,},Annotations:map[string]string{io.kubernetes.container.hash: 5b65e69b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d0f9b94a63b54376b5b3829bce9836e163d81fa80f24bf00e7f22b57d1a7a,PodSandboxId:455fc8fef39b80fd07b2de059ed7d5455df22677ec7846946cb948b87cbf9023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721241956624394448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hnb5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 80fc2e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebbb2d90c0739141688761958d1119db0b157d52ffc853e1617aae7b4bf391,PodSandboxId:aabc1991466408493cace4e1341882e1ba856c5c65e55c8fb572ee9a32e8e302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172124193613116032
5,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b95a014b1974e2af4c29b922c88ba23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446730942de93d8fa246bfeb34d266f7bf40a70f2053eb3e9ac31212deff821,PodSandboxId:84fe57441a688f0d08a97f67b75df506036728b8fc5ada6ca6c0e0dbeec677ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:172124193614
5161953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f45fb335c5e2df14c04532f6497e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef9a4c788e9faf3a71500cb6e6711f5724fd07dbb7913c27ce756e69d8f30428,PodSandboxId:037fefa47cc3e2e9904b65a373b2dd771ffd70af156e34a05516c8f22a809237,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172124
1936105803776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b881b6fb22297dfca21c86875467d3,},Annotations:map[string]string{io.kubernetes.container.hash: 83137d99,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b9c11d9cadb0acdcc1067e825e408e9b1254ab6fea64f318e165d96850aa,PodSandboxId:66ba99c8af289e788e9aa97aa463bbeec09c98cbc44cc6fd685aff9ece2cc687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241936071541681,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9381e247719c18d6691e17ec6054a636be76ac6e3cda059f343170a5021edac6,PodSandboxId:e7f0782e6d6c684dbec94e6a3219bf7a955c607c4980918f26af71b26860402a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241647657505331,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34491e69-c80f-4374-b3ac-bc379637a93b name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.677018801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5aca05e-59ad-4a30-96dc-a4165b6f475d name=/runtime.v1.RuntimeService/Version
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.677123829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5aca05e-59ad-4a30-96dc-a4165b6f475d name=/runtime.v1.RuntimeService/Version
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.678149604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd186e01-7b71-42e2-aaa2-a8fcbb5053ff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.679001349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242500678677336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd186e01-7b71-42e2-aaa2-a8fcbb5053ff name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.679570993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=875bad6a-bc1d-4b91-8f66-b9bf495c1561 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.679637434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=875bad6a-bc1d-4b91-8f66-b9bf495c1561 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.679958524Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:218a44cd8585fbb83856c49696567afd594b4da967ac5ce50a0f632e2a6138cf,PodSandboxId:fb12b6b348e3e8568d69a1524584087652bbf96f2a5c845f8fda2ab30e641139,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241957906796398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9b11611-2008-4a15-a661-62809bd1d4c3,},Annotations:map[string]string{io.kubernetes.container.hash: a189e809,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5119186a70a760fe0c9b05022c775aaabe1a15791e247d7e841827098d306094,PodSandboxId:7d313062ed4075c4bf53961edb6b650038f88792ea8bcc9f3937e4a98ba438b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957428999810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn64r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cbef26-555a-4693-afac-c739d9238a04,},Annotations:map[string]string{io.kubernetes.container.hash: 415218df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53dbf27ee711bde074b1abeee9bda1c0d830a983bf1acba2b6c8dfce83506a1,PodSandboxId:3407801315db4c603819b6fbd1e8c488045e35c148565ec53bbe65a53f31e252,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957235378366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fp4tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: dc66092c-9183-4630-93cc-6ec4aa59a928,},Annotations:map[string]string{io.kubernetes.container.hash: 5b65e69b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d0f9b94a63b54376b5b3829bce9836e163d81fa80f24bf00e7f22b57d1a7a,PodSandboxId:455fc8fef39b80fd07b2de059ed7d5455df22677ec7846946cb948b87cbf9023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721241956624394448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hnb5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 80fc2e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebbb2d90c0739141688761958d1119db0b157d52ffc853e1617aae7b4bf391,PodSandboxId:aabc1991466408493cace4e1341882e1ba856c5c65e55c8fb572ee9a32e8e302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172124193613116032
5,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b95a014b1974e2af4c29b922c88ba23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446730942de93d8fa246bfeb34d266f7bf40a70f2053eb3e9ac31212deff821,PodSandboxId:84fe57441a688f0d08a97f67b75df506036728b8fc5ada6ca6c0e0dbeec677ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:172124193614
5161953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f45fb335c5e2df14c04532f6497e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef9a4c788e9faf3a71500cb6e6711f5724fd07dbb7913c27ce756e69d8f30428,PodSandboxId:037fefa47cc3e2e9904b65a373b2dd771ffd70af156e34a05516c8f22a809237,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172124
1936105803776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b881b6fb22297dfca21c86875467d3,},Annotations:map[string]string{io.kubernetes.container.hash: 83137d99,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b9c11d9cadb0acdcc1067e825e408e9b1254ab6fea64f318e165d96850aa,PodSandboxId:66ba99c8af289e788e9aa97aa463bbeec09c98cbc44cc6fd685aff9ece2cc687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241936071541681,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9381e247719c18d6691e17ec6054a636be76ac6e3cda059f343170a5021edac6,PodSandboxId:e7f0782e6d6c684dbec94e6a3219bf7a955c607c4980918f26af71b26860402a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241647657505331,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=875bad6a-bc1d-4b91-8f66-b9bf495c1561 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.722484563Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=288cea52-06ae-4b27-8df5-f23e574b62ce name=/runtime.v1.RuntimeService/Version
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.722557287Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=288cea52-06ae-4b27-8df5-f23e574b62ce name=/runtime.v1.RuntimeService/Version
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.723631180Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f9042b2-4270-4b89-a05f-d5d7ba7fa1fe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.724169397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242500724145469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f9042b2-4270-4b89-a05f-d5d7ba7fa1fe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.724778732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b749b67-0c7a-4474-a88b-279ee2dcb6c6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.724832606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b749b67-0c7a-4474-a88b-279ee2dcb6c6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.725146422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:218a44cd8585fbb83856c49696567afd594b4da967ac5ce50a0f632e2a6138cf,PodSandboxId:fb12b6b348e3e8568d69a1524584087652bbf96f2a5c845f8fda2ab30e641139,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241957906796398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9b11611-2008-4a15-a661-62809bd1d4c3,},Annotations:map[string]string{io.kubernetes.container.hash: a189e809,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5119186a70a760fe0c9b05022c775aaabe1a15791e247d7e841827098d306094,PodSandboxId:7d313062ed4075c4bf53961edb6b650038f88792ea8bcc9f3937e4a98ba438b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957428999810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn64r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cbef26-555a-4693-afac-c739d9238a04,},Annotations:map[string]string{io.kubernetes.container.hash: 415218df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53dbf27ee711bde074b1abeee9bda1c0d830a983bf1acba2b6c8dfce83506a1,PodSandboxId:3407801315db4c603819b6fbd1e8c488045e35c148565ec53bbe65a53f31e252,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957235378366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fp4tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: dc66092c-9183-4630-93cc-6ec4aa59a928,},Annotations:map[string]string{io.kubernetes.container.hash: 5b65e69b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d0f9b94a63b54376b5b3829bce9836e163d81fa80f24bf00e7f22b57d1a7a,PodSandboxId:455fc8fef39b80fd07b2de059ed7d5455df22677ec7846946cb948b87cbf9023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721241956624394448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hnb5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 80fc2e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebbb2d90c0739141688761958d1119db0b157d52ffc853e1617aae7b4bf391,PodSandboxId:aabc1991466408493cace4e1341882e1ba856c5c65e55c8fb572ee9a32e8e302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172124193613116032
5,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b95a014b1974e2af4c29b922c88ba23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446730942de93d8fa246bfeb34d266f7bf40a70f2053eb3e9ac31212deff821,PodSandboxId:84fe57441a688f0d08a97f67b75df506036728b8fc5ada6ca6c0e0dbeec677ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:172124193614
5161953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f45fb335c5e2df14c04532f6497e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef9a4c788e9faf3a71500cb6e6711f5724fd07dbb7913c27ce756e69d8f30428,PodSandboxId:037fefa47cc3e2e9904b65a373b2dd771ffd70af156e34a05516c8f22a809237,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172124
1936105803776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b881b6fb22297dfca21c86875467d3,},Annotations:map[string]string{io.kubernetes.container.hash: 83137d99,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b9c11d9cadb0acdcc1067e825e408e9b1254ab6fea64f318e165d96850aa,PodSandboxId:66ba99c8af289e788e9aa97aa463bbeec09c98cbc44cc6fd685aff9ece2cc687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241936071541681,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9381e247719c18d6691e17ec6054a636be76ac6e3cda059f343170a5021edac6,PodSandboxId:e7f0782e6d6c684dbec94e6a3219bf7a955c607c4980918f26af71b26860402a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241647657505331,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b749b67-0c7a-4474-a88b-279ee2dcb6c6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.756538109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=760ba00b-0961-4a56-8264-4d36e309b675 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.756603129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=760ba00b-0961-4a56-8264-4d36e309b675 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.757434998Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1866edf0-a976-42f8-9879-09f0c3fe9d57 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.758106256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242500758082001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1866edf0-a976-42f8-9879-09f0c3fe9d57 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.758565564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e29b8f71-0b90-4d1a-a24e-4840c69901dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.758629191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e29b8f71-0b90-4d1a-a24e-4840c69901dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:00 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 18:55:00.758899218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:218a44cd8585fbb83856c49696567afd594b4da967ac5ce50a0f632e2a6138cf,PodSandboxId:fb12b6b348e3e8568d69a1524584087652bbf96f2a5c845f8fda2ab30e641139,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241957906796398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9b11611-2008-4a15-a661-62809bd1d4c3,},Annotations:map[string]string{io.kubernetes.container.hash: a189e809,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5119186a70a760fe0c9b05022c775aaabe1a15791e247d7e841827098d306094,PodSandboxId:7d313062ed4075c4bf53961edb6b650038f88792ea8bcc9f3937e4a98ba438b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957428999810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn64r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cbef26-555a-4693-afac-c739d9238a04,},Annotations:map[string]string{io.kubernetes.container.hash: 415218df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53dbf27ee711bde074b1abeee9bda1c0d830a983bf1acba2b6c8dfce83506a1,PodSandboxId:3407801315db4c603819b6fbd1e8c488045e35c148565ec53bbe65a53f31e252,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957235378366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fp4tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: dc66092c-9183-4630-93cc-6ec4aa59a928,},Annotations:map[string]string{io.kubernetes.container.hash: 5b65e69b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d0f9b94a63b54376b5b3829bce9836e163d81fa80f24bf00e7f22b57d1a7a,PodSandboxId:455fc8fef39b80fd07b2de059ed7d5455df22677ec7846946cb948b87cbf9023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721241956624394448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hnb5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 80fc2e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebbb2d90c0739141688761958d1119db0b157d52ffc853e1617aae7b4bf391,PodSandboxId:aabc1991466408493cace4e1341882e1ba856c5c65e55c8fb572ee9a32e8e302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172124193613116032
5,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b95a014b1974e2af4c29b922c88ba23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446730942de93d8fa246bfeb34d266f7bf40a70f2053eb3e9ac31212deff821,PodSandboxId:84fe57441a688f0d08a97f67b75df506036728b8fc5ada6ca6c0e0dbeec677ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:172124193614
5161953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f45fb335c5e2df14c04532f6497e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef9a4c788e9faf3a71500cb6e6711f5724fd07dbb7913c27ce756e69d8f30428,PodSandboxId:037fefa47cc3e2e9904b65a373b2dd771ffd70af156e34a05516c8f22a809237,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172124
1936105803776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b881b6fb22297dfca21c86875467d3,},Annotations:map[string]string{io.kubernetes.container.hash: 83137d99,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b9c11d9cadb0acdcc1067e825e408e9b1254ab6fea64f318e165d96850aa,PodSandboxId:66ba99c8af289e788e9aa97aa463bbeec09c98cbc44cc6fd685aff9ece2cc687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241936071541681,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9381e247719c18d6691e17ec6054a636be76ac6e3cda059f343170a5021edac6,PodSandboxId:e7f0782e6d6c684dbec94e6a3219bf7a955c607c4980918f26af71b26860402a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241647657505331,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e29b8f71-0b90-4d1a-a24e-4840c69901dc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	218a44cd8585f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   fb12b6b348e3e       storage-provisioner
	5119186a70a76       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   7d313062ed407       coredns-7db6d8ff4d-jn64r
	d53dbf27ee711       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3407801315db4       coredns-7db6d8ff4d-fp4tg
	0f5d0f9b94a63       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   9 minutes ago       Running             kube-proxy                0                   455fc8fef39b8       kube-proxy-hnb5v
	a446730942de9       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   9 minutes ago       Running             kube-controller-manager   2                   84fe57441a688       kube-controller-manager-default-k8s-diff-port-022930
	26ebbb2d90c07       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   9 minutes ago       Running             kube-scheduler            2                   aabc199146640       kube-scheduler-default-k8s-diff-port-022930
	ef9a4c788e9fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   037fefa47cc3e       etcd-default-k8s-diff-port-022930
	d8e6b9c11d9ca       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   9 minutes ago       Running             kube-apiserver            2                   66ba99c8af289       kube-apiserver-default-k8s-diff-port-022930
	9381e247719c1       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   14 minutes ago      Exited              kube-apiserver            1                   e7f0782e6d6c6       kube-apiserver-default-k8s-diff-port-022930
	
	
	==> coredns [5119186a70a760fe0c9b05022c775aaabe1a15791e247d7e841827098d306094] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d53dbf27ee711bde074b1abeee9bda1c0d830a983bf1acba2b6c8dfce83506a1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-022930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-022930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=default-k8s-diff-port-022930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_45_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:45:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-022930
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:54:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:51:08 +0000   Wed, 17 Jul 2024 18:45:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:51:08 +0000   Wed, 17 Jul 2024 18:45:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:51:08 +0000   Wed, 17 Jul 2024 18:45:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:51:08 +0000   Wed, 17 Jul 2024 18:45:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.245
	  Hostname:    default-k8s-diff-port-022930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1726fce1c58f432685c5f3f3c36f29de
	  System UUID:                1726fce1-c58f-4326-85c5-f3f3c36f29de
	  Boot ID:                    91256dde-6391-4dcc-8a3f-294e4be086b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fp4tg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-7db6d8ff4d-jn64r                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-default-k8s-diff-port-022930                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-022930             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-022930    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-hnb5v                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-default-k8s-diff-port-022930             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-pfmwt                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node default-k8s-diff-port-022930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node default-k8s-diff-port-022930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node default-k8s-diff-port-022930 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m6s   node-controller  Node default-k8s-diff-port-022930 event: Registered Node default-k8s-diff-port-022930 in Controller
	
	
	==> dmesg <==
	[  +0.052949] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048252] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.764057] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.954080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.389021] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.099061] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.058762] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065474] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.171592] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.165371] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.277171] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +4.139605] systemd-fstab-generator[800]: Ignoring "noauto" option for root device
	[  +1.493155] systemd-fstab-generator[922]: Ignoring "noauto" option for root device
	[  +0.065565] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.510432] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.615223] kauditd_printk_skb: 79 callbacks suppressed
	[Jul17 18:45] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.786552] systemd-fstab-generator[3583]: Ignoring "noauto" option for root device
	[  +4.386271] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.681400] systemd-fstab-generator[3908]: Ignoring "noauto" option for root device
	[ +14.826199] systemd-fstab-generator[4110]: Ignoring "noauto" option for root device
	[  +0.105221] kauditd_printk_skb: 14 callbacks suppressed
	[Jul17 18:47] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [ef9a4c788e9faf3a71500cb6e6711f5724fd07dbb7913c27ce756e69d8f30428] <==
	{"level":"info","ts":"2024-07-17T18:45:36.395166Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 switched to configuration voters=(9405602029447433462)"}
	{"level":"info","ts":"2024-07-17T18:45:36.395368Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e727aea1cd049c6","local-member-id":"8287693677e84cf6","added-peer-id":"8287693677e84cf6","added-peer-peer-urls":["https://192.168.50.245:2380"]}
	{"level":"info","ts":"2024-07-17T18:45:36.414274Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T18:45:36.414515Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8287693677e84cf6","initial-advertise-peer-urls":["https://192.168.50.245:2380"],"listen-peer-urls":["https://192.168.50.245:2380"],"advertise-client-urls":["https://192.168.50.245:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.245:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T18:45:36.414584Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T18:45:36.414744Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.245:2380"}
	{"level":"info","ts":"2024-07-17T18:45:36.414807Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.245:2380"}
	{"level":"info","ts":"2024-07-17T18:45:36.863793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:36.863868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:36.863897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 received MsgPreVoteResp from 8287693677e84cf6 at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:36.86391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:36.863918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 received MsgVoteResp from 8287693677e84cf6 at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:36.863935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:36.863947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8287693677e84cf6 elected leader 8287693677e84cf6 at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:36.868028Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:36.871956Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8287693677e84cf6","local-member-attributes":"{Name:default-k8s-diff-port-022930 ClientURLs:[https://192.168.50.245:2379]}","request-path":"/0/members/8287693677e84cf6/attributes","cluster-id":"6e727aea1cd049c6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:45:36.873794Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e727aea1cd049c6","local-member-id":"8287693677e84cf6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:36.873959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:36.873992Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:36.874058Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:36.88478Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:36.885543Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.245:2379"}
	{"level":"info","ts":"2024-07-17T18:45:36.891782Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:45:36.897824Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:45:36.91197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:55:01 up 14 min,  0 users,  load average: 0.12, 0.25, 0.19
	Linux default-k8s-diff-port-022930 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9381e247719c18d6691e17ec6054a636be76ac6e3cda059f343170a5021edac6] <==
	W0717 18:45:27.789563       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.801014       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.804445       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.817902       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.856336       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.894776       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.907942       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.975805       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.016961       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.045172       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.170892       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.258995       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.378868       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.602072       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:31.721801       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.181991       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.262818       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.427102       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.518946       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.576976       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.754347       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.757871       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.849280       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.974281       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.981869       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d8e6b9c11d9cadb0acdcc1067e825e408e9b1254ab6fea64f318e165d96850aa] <==
	I0717 18:48:58.237007       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:50:38.789248       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:50:38.789385       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 18:50:39.789519       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:50:39.789578       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 18:50:39.789586       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:50:39.789692       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:50:39.789834       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 18:50:39.791033       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:51:39.790867       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:51:39.790942       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 18:51:39.790954       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:51:39.791260       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:51:39.791407       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 18:51:39.793104       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:53:39.791549       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:53:39.791633       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 18:53:39.791650       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:53:39.793831       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:53:39.793964       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 18:53:39.793993       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a446730942de93d8fa246bfeb34d266f7bf40a70f2053eb3e9ac31212deff821] <==
	I0717 18:49:25.799191       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:49:55.357370       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:49:55.807152       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:50:25.362380       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:50:25.815022       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:50:55.368201       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:50:55.823030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:51:25.374154       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:51:25.832942       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 18:51:52.532252       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="318.585µs"
	E0717 18:51:55.379871       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:51:55.840412       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 18:52:03.530866       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="230.879µs"
	E0717 18:52:25.387015       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:52:25.849842       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:52:55.393043       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:52:55.857423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:53:25.397793       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:53:25.864890       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:53:55.402873       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:53:55.873383       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:54:25.407458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:54:25.882026       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:54:55.413564       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:54:55.889506       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0f5d0f9b94a63b54376b5b3829bce9836e163d81fa80f24bf00e7f22b57d1a7a] <==
	I0717 18:45:56.926246       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:45:56.940493       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.245"]
	I0717 18:45:57.001049       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:45:57.001086       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:45:57.001101       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:45:57.006162       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:45:57.006401       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:45:57.006413       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:45:57.008143       1 config.go:192] "Starting service config controller"
	I0717 18:45:57.008154       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:45:57.008195       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:45:57.008200       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:45:57.008575       1 config.go:319] "Starting node config controller"
	I0717 18:45:57.008583       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:45:57.108850       1 shared_informer.go:320] Caches are synced for node config
	I0717 18:45:57.108948       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:45:57.109001       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [26ebbb2d90c0739141688761958d1119db0b157d52ffc853e1617aae7b4bf391] <==
	W0717 18:45:38.811327       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:45:38.811349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:45:38.811386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:45:38.811406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:45:38.811603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:45:38.811693       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:45:39.622254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:45:39.622319       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:45:39.648065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:45:39.648262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:45:39.674111       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:45:39.674223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:45:39.790761       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:45:39.790880       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:45:39.853201       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:45:39.853283       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:45:39.896746       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:45:39.896961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:45:39.922274       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:45:39.922360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:45:39.947780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:45:39.948924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 18:45:39.985747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:45:39.986101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0717 18:45:41.900643       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 18:52:41 default-k8s-diff-port-022930 kubelet[3915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:52:41 default-k8s-diff-port-022930 kubelet[3915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:52:41 default-k8s-diff-port-022930 kubelet[3915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:52:41 default-k8s-diff-port-022930 kubelet[3915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:52:43 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:52:43.515392    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 18:52:56 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:52:56.515146    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 18:53:09 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:53:09.514977    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 18:53:23 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:53:23.514978    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 18:53:38 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:53:38.515237    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 18:53:41 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:53:41.542943    3915 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:53:41 default-k8s-diff-port-022930 kubelet[3915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:53:41 default-k8s-diff-port-022930 kubelet[3915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:53:41 default-k8s-diff-port-022930 kubelet[3915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:53:41 default-k8s-diff-port-022930 kubelet[3915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:53:50 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:53:50.514548    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 18:54:02 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:54:02.514272    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 18:54:14 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:54:14.514994    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 18:54:27 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:54:27.515253    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 18:54:41 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:54:41.516005    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 18:54:41 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:54:41.541172    3915 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:54:41 default-k8s-diff-port-022930 kubelet[3915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:54:41 default-k8s-diff-port-022930 kubelet[3915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:54:41 default-k8s-diff-port-022930 kubelet[3915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:54:41 default-k8s-diff-port-022930 kubelet[3915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:54:55 default-k8s-diff-port-022930 kubelet[3915]: E0717 18:54:55.515076    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	
	
	==> storage-provisioner [218a44cd8585fbb83856c49696567afd594b4da967ac5ce50a0f632e2a6138cf] <==
	I0717 18:45:58.006792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:45:58.015577       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:45:58.015616       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:45:58.026335       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:45:58.028218       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-022930_0ef1e56d-bcfe-49d9-8bc8-60eb7d40d4bb!
	I0717 18:45:58.029070       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b39b949f-dc71-4797-979a-a1feb97bb555", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-022930_0ef1e56d-bcfe-49d9-8bc8-60eb7d40d4bb became leader
	I0717 18:45:58.128686       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-022930_0ef1e56d-bcfe-49d9-8bc8-60eb7d40d4bb!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-022930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-pfmwt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-022930 describe pod metrics-server-569cc877fc-pfmwt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-022930 describe pod metrics-server-569cc877fc-pfmwt: exit status 1 (61.333134ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-pfmwt" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-022930 describe pod metrics-server-569cc877fc-pfmwt: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.15s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.24s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 18:47:09.805689   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:47:30.705909   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
E0717 18:47:59.763178   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:48:21.395396   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-527415 -n embed-certs-527415
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-17 18:55:20.400328602 +0000 UTC m=+6235.129524559
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-527415 -n embed-certs-527415
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-527415 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-527415 logs -n 25: (2.101130086s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-527415            | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-371172                                        | pause-371172                 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-341716 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | disable-driver-mounts-341716                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:34 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-066175             | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC | 17 Jul 24 18:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-066175                                   | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-022930  | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC | 17 Jul 24 18:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-527415                 | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-019549        | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-066175                  | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-066175 --memory=2200                     | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:45 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-019549             | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-022930       | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC | 17 Jul 24 18:45 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:37:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:37:14.473404   81068 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:37:14.473526   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473535   81068 out.go:304] Setting ErrFile to fd 2...
	I0717 18:37:14.473540   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473714   81068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:37:14.474251   81068 out.go:298] Setting JSON to false
	I0717 18:37:14.475115   81068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8377,"bootTime":1721233057,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:37:14.475172   81068 start.go:139] virtualization: kvm guest
	I0717 18:37:14.477356   81068 out.go:177] * [default-k8s-diff-port-022930] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:37:14.478600   81068 notify.go:220] Checking for updates...
	I0717 18:37:14.478615   81068 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:37:14.480094   81068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:37:14.481516   81068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:37:14.482886   81068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:37:14.484159   81068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:37:14.485449   81068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:37:14.487164   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:37:14.487744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.487795   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.502368   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0717 18:37:14.502712   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.503192   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.503213   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.503574   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.503778   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.504032   81068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:37:14.504326   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.504381   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.518330   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0717 18:37:14.518718   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.519095   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.519114   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.519409   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.519578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.549923   81068 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:37:14.551160   81068 start.go:297] selected driver: kvm2
	I0717 18:37:14.551175   81068 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.551302   81068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:37:14.551931   81068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.552008   81068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:37:14.566038   81068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:37:14.566371   81068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:37:14.566443   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:37:14.566466   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:37:14.566516   81068 start.go:340] cluster config:
	{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.566643   81068 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.568602   81068 out.go:177] * Starting "default-k8s-diff-port-022930" primary control-plane node in "default-k8s-diff-port-022930" cluster
	I0717 18:37:13.057187   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:16.129274   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:14.569868   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:37:14.569908   81068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:37:14.569919   81068 cache.go:56] Caching tarball of preloaded images
	I0717 18:37:14.569992   81068 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:37:14.570003   81068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:37:14.570100   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:37:14.570277   81068 start.go:360] acquireMachinesLock for default-k8s-diff-port-022930: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:37:22.209207   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:25.281226   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:31.361221   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:34.433258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:40.513234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:43.585225   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:49.665198   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:52.737256   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:58.817201   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:01.889213   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:07.969247   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:11.041264   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:17.121227   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:20.193250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:26.273206   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:29.345193   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:35.425259   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:38.497261   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:44.577185   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:47.649306   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:53.729234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:56.801257   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:02.881239   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:05.953258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:12.033251   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:15.105230   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:21.185200   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:24.257195   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:30.337181   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:33.409224   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:39.489219   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:42.561250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:45.565739   80401 start.go:364] duration metric: took 4m11.345351864s to acquireMachinesLock for "no-preload-066175"
	I0717 18:39:45.565801   80401 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:39:45.565807   80401 fix.go:54] fixHost starting: 
	I0717 18:39:45.566167   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:39:45.566198   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:39:45.580996   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45665
	I0717 18:39:45.581389   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:39:45.581797   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:39:45.581817   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:39:45.582145   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:39:45.582323   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:39:45.582467   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:39:45.584074   80401 fix.go:112] recreateIfNeeded on no-preload-066175: state=Stopped err=<nil>
	I0717 18:39:45.584109   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	W0717 18:39:45.584260   80401 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:39:45.586842   80401 out.go:177] * Restarting existing kvm2 VM for "no-preload-066175" ...
	I0717 18:39:45.563046   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:39:45.563105   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563521   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:39:45.563555   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563758   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:39:45.565594   80180 machine.go:97] duration metric: took 4m37.427146226s to provisionDockerMachine
	I0717 18:39:45.565643   80180 fix.go:56] duration metric: took 4m37.448013968s for fixHost
	I0717 18:39:45.565651   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 4m37.448033785s
	W0717 18:39:45.565675   80180 start.go:714] error starting host: provision: host is not running
	W0717 18:39:45.565775   80180 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 18:39:45.565784   80180 start.go:729] Will try again in 5 seconds ...
	I0717 18:39:45.587901   80401 main.go:141] libmachine: (no-preload-066175) Calling .Start
	I0717 18:39:45.588046   80401 main.go:141] libmachine: (no-preload-066175) Ensuring networks are active...
	I0717 18:39:45.588666   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network default is active
	I0717 18:39:45.589012   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network mk-no-preload-066175 is active
	I0717 18:39:45.589386   80401 main.go:141] libmachine: (no-preload-066175) Getting domain xml...
	I0717 18:39:45.589959   80401 main.go:141] libmachine: (no-preload-066175) Creating domain...
	I0717 18:39:46.785717   80401 main.go:141] libmachine: (no-preload-066175) Waiting to get IP...
	I0717 18:39:46.786495   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:46.786912   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:46.786974   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:46.786888   81612 retry.go:31] will retry after 301.458026ms: waiting for machine to come up
	I0717 18:39:47.090556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.091129   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.091154   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.091098   81612 retry.go:31] will retry after 347.107185ms: waiting for machine to come up
	I0717 18:39:47.439530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.440010   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.440033   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.439947   81612 retry.go:31] will retry after 436.981893ms: waiting for machine to come up
	I0717 18:39:47.878684   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.879091   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.879120   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.879051   81612 retry.go:31] will retry after 582.942833ms: waiting for machine to come up
	I0717 18:39:48.464068   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:48.464568   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:48.464593   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:48.464513   81612 retry.go:31] will retry after 633.101908ms: waiting for machine to come up
	I0717 18:39:49.099383   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.099762   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.099784   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.099720   81612 retry.go:31] will retry after 847.181679ms: waiting for machine to come up
	I0717 18:39:50.567294   80180 start.go:360] acquireMachinesLock for embed-certs-527415: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:39:49.948696   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.949228   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.949260   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.949188   81612 retry.go:31] will retry after 1.048891217s: waiting for machine to come up
	I0717 18:39:50.999658   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.000062   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.000099   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.000001   81612 retry.go:31] will retry after 942.285454ms: waiting for machine to come up
	I0717 18:39:51.944171   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.944676   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.944702   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.944632   81612 retry.go:31] will retry after 1.21768861s: waiting for machine to come up
	I0717 18:39:53.163883   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:53.164345   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:53.164368   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:53.164305   81612 retry.go:31] will retry after 1.505905193s: waiting for machine to come up
	I0717 18:39:54.671532   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:54.671951   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:54.671977   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:54.671918   81612 retry.go:31] will retry after 2.885547597s: waiting for machine to come up
	I0717 18:39:57.560375   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:57.560878   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:57.560902   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:57.560830   81612 retry.go:31] will retry after 3.53251124s: waiting for machine to come up
	I0717 18:40:02.249487   80857 start.go:364] duration metric: took 3m17.095542929s to acquireMachinesLock for "old-k8s-version-019549"
	I0717 18:40:02.249548   80857 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:02.249556   80857 fix.go:54] fixHost starting: 
	I0717 18:40:02.249946   80857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:02.249976   80857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:02.269365   80857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0717 18:40:02.269715   80857 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:02.270182   80857 main.go:141] libmachine: Using API Version  1
	I0717 18:40:02.270205   80857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:02.270534   80857 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:02.270738   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:02.270875   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetState
	I0717 18:40:02.272408   80857 fix.go:112] recreateIfNeeded on old-k8s-version-019549: state=Stopped err=<nil>
	I0717 18:40:02.272443   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	W0717 18:40:02.272597   80857 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:02.274702   80857 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-019549" ...
	I0717 18:40:01.094975   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has current primary IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095579   80401 main.go:141] libmachine: (no-preload-066175) Found IP for machine: 192.168.72.216
	I0717 18:40:01.095592   80401 main.go:141] libmachine: (no-preload-066175) Reserving static IP address...
	I0717 18:40:01.095955   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.095980   80401 main.go:141] libmachine: (no-preload-066175) DBG | skip adding static IP to network mk-no-preload-066175 - found existing host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"}
	I0717 18:40:01.095989   80401 main.go:141] libmachine: (no-preload-066175) Reserved static IP address: 192.168.72.216
	I0717 18:40:01.096000   80401 main.go:141] libmachine: (no-preload-066175) Waiting for SSH to be available...
	I0717 18:40:01.096010   80401 main.go:141] libmachine: (no-preload-066175) DBG | Getting to WaitForSSH function...
	I0717 18:40:01.098163   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098498   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.098521   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098631   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH client type: external
	I0717 18:40:01.098657   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa (-rw-------)
	I0717 18:40:01.098692   80401 main.go:141] libmachine: (no-preload-066175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:01.098707   80401 main.go:141] libmachine: (no-preload-066175) DBG | About to run SSH command:
	I0717 18:40:01.098720   80401 main.go:141] libmachine: (no-preload-066175) DBG | exit 0
	I0717 18:40:01.216740   80401 main.go:141] libmachine: (no-preload-066175) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:01.217099   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetConfigRaw
	I0717 18:40:01.217706   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.220160   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220461   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.220492   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220656   80401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/config.json ...
	I0717 18:40:01.220843   80401 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:01.220860   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:01.221067   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.223044   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223347   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.223371   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223531   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.223719   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223864   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223980   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.224125   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.224332   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.224345   80401 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:01.321053   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:01.321083   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321333   80401 buildroot.go:166] provisioning hostname "no-preload-066175"
	I0717 18:40:01.321359   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321529   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.323945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324269   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.324297   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324421   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.324582   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324724   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324837   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.324996   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.325162   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.325175   80401 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-066175 && echo "no-preload-066175" | sudo tee /etc/hostname
	I0717 18:40:01.435003   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-066175
	
	I0717 18:40:01.435033   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.437795   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438113   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.438155   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438344   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.438533   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438692   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.438948   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.439094   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.439108   80401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-066175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-066175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-066175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:01.540598   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:01.540631   80401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:01.540650   80401 buildroot.go:174] setting up certificates
	I0717 18:40:01.540660   80401 provision.go:84] configureAuth start
	I0717 18:40:01.540669   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.540977   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.543503   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543788   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.543817   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543907   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.545954   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546261   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.546280   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546415   80401 provision.go:143] copyHostCerts
	I0717 18:40:01.546483   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:01.546498   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:01.546596   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:01.546730   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:01.546743   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:01.546788   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:01.546878   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:01.546888   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:01.546921   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:01.547054   80401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.no-preload-066175 san=[127.0.0.1 192.168.72.216 localhost minikube no-preload-066175]
	I0717 18:40:01.628522   80401 provision.go:177] copyRemoteCerts
	I0717 18:40:01.628574   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:01.628596   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.631306   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631714   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.631761   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631876   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.632050   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.632210   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.632330   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:01.711344   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:01.738565   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 18:40:01.765888   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:40:01.790852   80401 provision.go:87] duration metric: took 250.181586ms to configureAuth
	I0717 18:40:01.790874   80401 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:01.791046   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:40:01.791111   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.793530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.793922   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.793945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.794095   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.794323   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794497   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794635   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.794786   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.794955   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.794969   80401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:02.032506   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:02.032543   80401 machine.go:97] duration metric: took 811.687511ms to provisionDockerMachine
	I0717 18:40:02.032554   80401 start.go:293] postStartSetup for "no-preload-066175" (driver="kvm2")
	I0717 18:40:02.032567   80401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:02.032596   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.032921   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:02.032966   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.035429   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035731   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.035767   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035921   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.036081   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.036351   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.036493   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.114601   80401 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:02.118230   80401 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:02.118247   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:02.118308   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:02.118384   80401 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:02.118592   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:02.126753   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:02.148028   80401 start.go:296] duration metric: took 115.461293ms for postStartSetup
	I0717 18:40:02.148066   80401 fix.go:56] duration metric: took 16.582258787s for fixHost
	I0717 18:40:02.148084   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.150550   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.150917   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.150949   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.151061   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.151242   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151394   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151513   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.151658   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:02.151828   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:02.151841   80401 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:02.249303   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241602.223072082
	
	I0717 18:40:02.249334   80401 fix.go:216] guest clock: 1721241602.223072082
	I0717 18:40:02.249344   80401 fix.go:229] Guest: 2024-07-17 18:40:02.223072082 +0000 UTC Remote: 2024-07-17 18:40:02.14806999 +0000 UTC m=+268.060359078 (delta=75.002092ms)
	I0717 18:40:02.249388   80401 fix.go:200] guest clock delta is within tolerance: 75.002092ms
	I0717 18:40:02.249396   80401 start.go:83] releasing machines lock for "no-preload-066175", held for 16.683615057s
	I0717 18:40:02.249442   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.249735   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:02.252545   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.252896   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.252929   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.253053   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253516   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253770   80401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:02.253803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.253913   80401 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:02.253937   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.256152   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256462   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.256501   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256558   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.256616   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256718   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.256879   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257013   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.257021   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.257038   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.257158   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.257312   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.257469   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257604   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.376103   80401 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:02.381639   80401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:02.529357   80401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:02.536396   80401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:02.536463   80401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:02.555045   80401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:02.555067   80401 start.go:495] detecting cgroup driver to use...
	I0717 18:40:02.555130   80401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:02.570540   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:02.583804   80401 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:02.583867   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:02.596657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:02.610371   80401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:02.717489   80401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:02.875146   80401 docker.go:233] disabling docker service ...
	I0717 18:40:02.875235   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:02.895657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:02.908366   80401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:03.018375   80401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:03.143922   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:03.160599   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:03.180643   80401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 18:40:03.180709   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.190040   80401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:03.190097   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.199275   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.208647   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.217750   80401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:03.226808   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.235779   80401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.251451   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.261476   80401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:03.269978   80401 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:03.270028   80401 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:03.280901   80401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:03.290184   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:03.409167   80401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:03.541153   80401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:03.541218   80401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:03.546012   80401 start.go:563] Will wait 60s for crictl version
	I0717 18:40:03.546059   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:03.549567   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:03.588396   80401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:03.588467   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.622472   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.652180   80401 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 18:40:03.653613   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:03.656560   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.656959   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:03.656987   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.657222   80401 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:03.661102   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:03.673078   80401 kubeadm.go:883] updating cluster {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:03.673212   80401 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:40:03.673248   80401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:03.703959   80401 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 18:40:03.703986   80401 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:03.704042   80401 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.704078   80401 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.704095   80401 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.704114   80401 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.704150   80401 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.704077   80401 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.704168   80401 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 18:40:03.704243   80401 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.705795   80401 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705801   80401 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.705792   80401 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.705816   80401 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.705829   80401 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 18:40:03.706094   80401 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.925413   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.930827   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 18:40:03.963901   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.964215   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.966162   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.970852   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.973664   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.997849   80401 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 18:40:03.997912   80401 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.997969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118851   80401 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 18:40:04.118888   80401 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.118892   80401 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 18:40:04.118924   80401 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.118934   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118943   80401 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 18:40:04.118969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118969   80401 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.119001   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119027   80401 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 18:40:04.119058   80401 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.119089   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:04.119104   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119065   80401 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 18:40:04.119136   80401 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.119159   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:02.275985   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .Start
	I0717 18:40:02.276143   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring networks are active...
	I0717 18:40:02.276898   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network default is active
	I0717 18:40:02.277333   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network mk-old-k8s-version-019549 is active
	I0717 18:40:02.277796   80857 main.go:141] libmachine: (old-k8s-version-019549) Getting domain xml...
	I0717 18:40:02.278481   80857 main.go:141] libmachine: (old-k8s-version-019549) Creating domain...
	I0717 18:40:03.571325   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting to get IP...
	I0717 18:40:03.572359   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.572836   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.572968   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.572816   81751 retry.go:31] will retry after 301.991284ms: waiting for machine to come up
	I0717 18:40:03.876263   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.876688   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.876715   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.876637   81751 retry.go:31] will retry after 286.461163ms: waiting for machine to come up
	I0717 18:40:04.165366   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.165873   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.165902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.165811   81751 retry.go:31] will retry after 383.479108ms: waiting for machine to come up
	I0717 18:40:04.551152   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.551615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.551650   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.551589   81751 retry.go:31] will retry after 429.076714ms: waiting for machine to come up
	I0717 18:40:04.982157   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.982517   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.982545   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.982470   81751 retry.go:31] will retry after 553.684035ms: waiting for machine to come up
	I0717 18:40:04.122952   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.130590   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.130741   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.200609   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.200631   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.200643   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 18:40:04.200728   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:04.200741   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.200815   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.212034   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 18:40:04.212057   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.212113   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:04.212123   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.259447   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259525   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259548   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259552   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259553   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 18:40:04.259534   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.259588   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259591   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 18:40:04.259628   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259639   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.550060   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236639   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.976976668s)
	I0717 18:40:06.236683   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236691   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.97711629s)
	I0717 18:40:06.236718   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236732   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.977125153s)
	I0717 18:40:06.236752   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 18:40:06.236776   80401 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236854   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236781   80401 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.68669473s)
	I0717 18:40:06.236908   80401 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 18:40:06.236951   80401 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236994   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:08.107122   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870244887s)
	I0717 18:40:08.107152   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 18:40:08.107175   80401 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107203   80401 ssh_runner.go:235] Completed: which crictl: (1.870188554s)
	I0717 18:40:08.107224   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107261   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:08.146817   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 18:40:08.146932   80401 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:05.538229   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:05.538753   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:05.538777   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:05.538702   81751 retry.go:31] will retry after 747.130907ms: waiting for machine to come up
	I0717 18:40:06.287146   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:06.287626   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:06.287665   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:06.287581   81751 retry.go:31] will retry after 1.171580264s: waiting for machine to come up
	I0717 18:40:07.461393   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:07.462015   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:07.462046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:07.461963   81751 retry.go:31] will retry after 1.199265198s: waiting for machine to come up
	I0717 18:40:08.663340   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:08.663789   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:08.663815   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:08.663745   81751 retry.go:31] will retry after 1.621895351s: waiting for machine to come up
	I0717 18:40:11.404193   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.296944718s)
	I0717 18:40:11.404228   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 18:40:11.404248   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:11.404245   80401 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.257289666s)
	I0717 18:40:11.404272   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 18:40:11.404294   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:13.370389   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966067238s)
	I0717 18:40:13.370426   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 18:40:13.370455   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:13.370505   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:10.287596   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:10.288019   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:10.288046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:10.287964   81751 retry.go:31] will retry after 1.748504204s: waiting for machine to come up
	I0717 18:40:12.038137   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:12.038582   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:12.038615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:12.038532   81751 retry.go:31] will retry after 2.477996004s: waiting for machine to come up
	I0717 18:40:14.517788   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:14.518175   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:14.518203   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:14.518123   81751 retry.go:31] will retry after 3.29313184s: waiting for machine to come up
	I0717 18:40:19.093608   81068 start.go:364] duration metric: took 3m4.523289209s to acquireMachinesLock for "default-k8s-diff-port-022930"
	I0717 18:40:19.093694   81068 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:19.093705   81068 fix.go:54] fixHost starting: 
	I0717 18:40:19.094122   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:19.094157   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:19.113793   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0717 18:40:19.114236   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:19.114755   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:40:19.114775   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:19.115110   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:19.115294   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:19.115434   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:40:19.117072   81068 fix.go:112] recreateIfNeeded on default-k8s-diff-port-022930: state=Stopped err=<nil>
	I0717 18:40:19.117109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	W0717 18:40:19.117256   81068 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:19.120986   81068 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-022930" ...
	I0717 18:40:15.214734   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.844202729s)
	I0717 18:40:15.214756   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 18:40:15.214777   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:15.214814   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:17.066570   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.851726063s)
	I0717 18:40:17.066604   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 18:40:17.066629   80401 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.066679   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.703556   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 18:40:17.703614   80401 cache_images.go:123] Successfully loaded all cached images
	I0717 18:40:17.703624   80401 cache_images.go:92] duration metric: took 13.999623105s to LoadCachedImages
	I0717 18:40:17.703638   80401 kubeadm.go:934] updating node { 192.168.72.216 8443 v1.31.0-beta.0 crio true true} ...
	I0717 18:40:17.703754   80401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-066175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:17.703830   80401 ssh_runner.go:195] Run: crio config
	I0717 18:40:17.753110   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:17.753138   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:17.753159   80401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:17.753190   80401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.216 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-066175 NodeName:no-preload-066175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:17.753404   80401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-066175"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:17.753492   80401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 18:40:17.763417   80401 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:17.763491   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:17.772139   80401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 18:40:17.786982   80401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 18:40:17.801327   80401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 18:40:17.816796   80401 ssh_runner.go:195] Run: grep 192.168.72.216	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:17.820354   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:17.834155   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:17.970222   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:17.989953   80401 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175 for IP: 192.168.72.216
	I0717 18:40:17.989977   80401 certs.go:194] generating shared ca certs ...
	I0717 18:40:17.989998   80401 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:17.990160   80401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:17.990217   80401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:17.990231   80401 certs.go:256] generating profile certs ...
	I0717 18:40:17.990365   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key
	I0717 18:40:17.990460   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672
	I0717 18:40:17.990509   80401 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key
	I0717 18:40:17.990679   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:17.990723   80401 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:17.990740   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:17.990772   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:17.990813   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:17.990846   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:17.990905   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:17.991590   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:18.035349   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:18.079539   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:18.110382   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:18.135920   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:40:18.168675   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:18.196132   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:18.230418   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:18.254319   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:18.277293   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:18.301416   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:18.330021   80401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:18.348803   80401 ssh_runner.go:195] Run: openssl version
	I0717 18:40:18.355126   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:18.366004   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370221   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370287   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.375799   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:18.385991   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:18.396141   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400451   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400526   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.406203   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:18.419059   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:18.429450   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433742   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433794   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.439261   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:18.450327   80401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:18.454734   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:18.460256   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:18.465766   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:18.471349   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:18.476780   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:18.482509   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:40:18.488138   80401 kubeadm.go:392] StartCluster: {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:18.488229   80401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:18.488270   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.532219   80401 cri.go:89] found id: ""
	I0717 18:40:18.532318   80401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:18.542632   80401 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:18.542655   80401 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:18.542699   80401 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:18.552352   80401 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:18.553351   80401 kubeconfig.go:125] found "no-preload-066175" server: "https://192.168.72.216:8443"
	I0717 18:40:18.555295   80401 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:18.565857   80401 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.216
	I0717 18:40:18.565892   80401 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:18.565905   80401 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:18.565958   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.605512   80401 cri.go:89] found id: ""
	I0717 18:40:18.605593   80401 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:18.622235   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:18.633175   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:18.633196   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:18.633241   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:18.641969   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:18.642023   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:18.651017   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:18.659619   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:18.659667   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:18.668008   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.675985   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:18.676037   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.685937   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:18.695574   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:18.695624   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:18.706040   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:18.717397   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:18.836009   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:19.122366   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Start
	I0717 18:40:19.122530   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring networks are active...
	I0717 18:40:19.123330   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network default is active
	I0717 18:40:19.123832   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network mk-default-k8s-diff-port-022930 is active
	I0717 18:40:19.124268   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Getting domain xml...
	I0717 18:40:19.124922   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Creating domain...
	I0717 18:40:17.813673   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814213   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has current primary IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814242   80857 main.go:141] libmachine: (old-k8s-version-019549) Found IP for machine: 192.168.39.128
	I0717 18:40:17.814277   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserving static IP address...
	I0717 18:40:17.814720   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserved static IP address: 192.168.39.128
	I0717 18:40:17.814738   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting for SSH to be available...
	I0717 18:40:17.814762   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.814783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | skip adding static IP to network mk-old-k8s-version-019549 - found existing host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"}
	I0717 18:40:17.814796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Getting to WaitForSSH function...
	I0717 18:40:17.817314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817714   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.817743   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH client type: external
	I0717 18:40:17.817944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa (-rw-------)
	I0717 18:40:17.817971   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:17.817984   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | About to run SSH command:
	I0717 18:40:17.818000   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | exit 0
	I0717 18:40:17.945902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:17.946262   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetConfigRaw
	I0717 18:40:17.946907   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:17.949757   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950158   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.950178   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950474   80857 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/config.json ...
	I0717 18:40:17.950706   80857 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:17.950728   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:17.950941   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:17.953738   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954141   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.954184   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954282   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:17.954456   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954617   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954790   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:17.954957   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:17.955121   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:17.955131   80857 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:18.061082   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:18.061113   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061405   80857 buildroot.go:166] provisioning hostname "old-k8s-version-019549"
	I0717 18:40:18.061432   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061685   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.064855   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.065348   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065537   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.065777   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.065929   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.066118   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.066329   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.066547   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.066564   80857 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-019549 && echo "old-k8s-version-019549" | sudo tee /etc/hostname
	I0717 18:40:18.191467   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-019549
	
	I0717 18:40:18.191517   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.194917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195455   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.195502   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195714   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.195908   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196105   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196288   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.196483   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.196708   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.196731   80857 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-019549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-019549/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-019549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:18.315020   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:18.315047   80857 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:18.315065   80857 buildroot.go:174] setting up certificates
	I0717 18:40:18.315078   80857 provision.go:84] configureAuth start
	I0717 18:40:18.315090   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.315358   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:18.318342   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.318796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.318826   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.319078   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.321562   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.321914   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.321944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.322125   80857 provision.go:143] copyHostCerts
	I0717 18:40:18.322208   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:18.322226   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:18.322309   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:18.322443   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:18.322457   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:18.322492   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:18.322579   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:18.322591   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:18.322621   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:18.322727   80857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-019549 san=[127.0.0.1 192.168.39.128 localhost minikube old-k8s-version-019549]
	I0717 18:40:18.397216   80857 provision.go:177] copyRemoteCerts
	I0717 18:40:18.397266   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:18.397301   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.399887   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400237   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.400286   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400531   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.400732   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.400880   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.401017   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.490677   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:18.518392   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 18:40:18.543930   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:18.567339   80857 provision.go:87] duration metric: took 252.250106ms to configureAuth
	I0717 18:40:18.567360   80857 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:18.567539   80857 config.go:182] Loaded profile config "old-k8s-version-019549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:40:18.567610   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.570373   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.570809   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570943   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.571140   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571281   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.571624   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.571841   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.571862   80857 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:18.845725   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:18.845752   80857 machine.go:97] duration metric: took 895.03234ms to provisionDockerMachine
	I0717 18:40:18.845765   80857 start.go:293] postStartSetup for "old-k8s-version-019549" (driver="kvm2")
	I0717 18:40:18.845778   80857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:18.845828   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:18.846158   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:18.846192   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.848760   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849264   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.849293   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.849649   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.849843   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.850007   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.938026   80857 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:18.943223   80857 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:18.943254   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:18.943317   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:18.943417   80857 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:18.943509   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:18.954887   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:18.976980   80857 start.go:296] duration metric: took 131.200877ms for postStartSetup
	I0717 18:40:18.977022   80857 fix.go:56] duration metric: took 16.727466541s for fixHost
	I0717 18:40:18.977041   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.980020   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980384   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.980417   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980533   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.980723   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.980903   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.981059   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.981207   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.981406   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.981418   80857 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:19.093409   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241619.063415252
	
	I0717 18:40:19.093433   80857 fix.go:216] guest clock: 1721241619.063415252
	I0717 18:40:19.093443   80857 fix.go:229] Guest: 2024-07-17 18:40:19.063415252 +0000 UTC Remote: 2024-07-17 18:40:18.97702579 +0000 UTC m=+213.960604949 (delta=86.389462ms)
	I0717 18:40:19.093494   80857 fix.go:200] guest clock delta is within tolerance: 86.389462ms
	I0717 18:40:19.093506   80857 start.go:83] releasing machines lock for "old-k8s-version-019549", held for 16.843984035s
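(Annotation: the lines above show the guest-clock check after fixHost: minikube reads the guest clock over SSH with `date +%s.%N`, compares it to the host clock, and accepts the ~86ms delta as within tolerance. The following Go snippet is an illustrative sketch of such a delta/tolerance check using only the standard library; the 2s tolerance and the simulated guest timestamp are assumptions, not minikube's actual code or settings.)

```go
package main

import (
	"fmt"
	"time"
)

// checkClockDelta compares a guest timestamp against the local (host) clock
// and reports whether the skew is within tolerance. Illustrative only; the
// tolerance value here is an assumption.
func checkClockDelta(guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Simulate a guest clock read, e.g. parsed from `date +%s.%N` run over SSH.
	guest := time.Now().Add(-86 * time.Millisecond)
	delta, ok := checkClockDelta(guest, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
```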
	I0717 18:40:19.093543   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.093842   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:19.096443   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.096817   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.096848   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.097035   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097579   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097769   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097859   80857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:19.097915   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.098007   80857 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:19.098031   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.100775   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101108   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101160   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101185   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101412   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101595   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.101606   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101637   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101718   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.101789   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101853   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.101975   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.102092   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.102212   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.218596   80857 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:19.225675   80857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:19.371453   80857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:19.381365   80857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:19.381438   80857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:19.397504   80857 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
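(Annotation: the step above disables conflicting bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix via `find ... -exec mv {} {}.mk_disabled`. The sketch below mirrors that rename pattern in Go for illustration; it is not minikube's implementation, and it assumes it is run with enough privileges to rename files under /etc/cni/net.d.)

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames bridge/podman CNI config files in dir by
// appending ".mk_disabled", mirroring the find/mv step in the log.
func disableCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled)
}
```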
	I0717 18:40:19.397530   80857 start.go:495] detecting cgroup driver to use...
	I0717 18:40:19.397597   80857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:19.412150   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:19.425495   80857 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:19.425578   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:19.438662   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:19.451953   80857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:19.578702   80857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:19.733328   80857 docker.go:233] disabling docker service ...
	I0717 18:40:19.733411   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:19.753615   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:19.774057   80857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:19.933901   80857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:20.049914   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:20.063500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:20.082560   80857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 18:40:20.082611   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.092857   80857 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:20.092912   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.103283   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.112612   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.122671   80857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
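(Annotation: the `sed -i` commands above pin the cri-o pause image to registry.k8s.io/pause:3.2 and force the "cgroupfs" cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. The Go sketch below performs the equivalent line substitutions in memory with regexp, purely to illustrate what those sed edits accomplish; minikube itself drives sed over SSH rather than using this code.)

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

// rewriteCrioConf applies the same substitutions as the logged sed commands:
// pin the pause image and set the cgroup manager.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = pauseRe.ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = cgroupRe.ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.2", "cgroupfs"))
}
```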
	I0717 18:40:20.132892   80857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:20.145445   80857 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:20.145501   80857 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:20.158958   80857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:20.168377   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:20.307224   80857 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:20.453407   80857 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:20.453490   80857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:20.458007   80857 start.go:563] Will wait 60s for crictl version
	I0717 18:40:20.458062   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:20.461420   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:20.507358   80857 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:20.507426   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.542812   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.577280   80857 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 18:40:20.432028   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.59597321s)
	I0717 18:40:20.432063   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.633854   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.728474   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.879989   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:20.880079   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.380421   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.880208   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.912390   80401 api_server.go:72] duration metric: took 1.032400417s to wait for apiserver process to appear ...
	I0717 18:40:21.912419   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:21.912443   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:21.912904   80401 api_server.go:269] stopped: https://192.168.72.216:8443/healthz: Get "https://192.168.72.216:8443/healthz": dial tcp 192.168.72.216:8443: connect: connection refused
	I0717 18:40:22.412598   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:20.397025   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting to get IP...
	I0717 18:40:20.398122   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398525   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398610   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.398506   81910 retry.go:31] will retry after 285.646022ms: waiting for machine to come up
	I0717 18:40:20.686556   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687151   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687263   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.687202   81910 retry.go:31] will retry after 239.996ms: waiting for machine to come up
	I0717 18:40:20.928604   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929111   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929139   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.929057   81910 retry.go:31] will retry after 487.674422ms: waiting for machine to come up
	I0717 18:40:21.418475   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418928   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.418872   81910 retry.go:31] will retry after 439.363216ms: waiting for machine to come up
	I0717 18:40:21.859546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860273   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.860145   81910 retry.go:31] will retry after 598.922134ms: waiting for machine to come up
	I0717 18:40:22.461026   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461509   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461542   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:22.461457   81910 retry.go:31] will retry after 908.602286ms: waiting for machine to come up
	I0717 18:40:23.371582   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372170   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:23.372093   81910 retry.go:31] will retry after 893.690966ms: waiting for machine to come up
	I0717 18:40:24.267377   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267908   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267935   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:24.267873   81910 retry.go:31] will retry after 1.468061022s: waiting for machine to come up
	I0717 18:40:20.578679   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:20.581569   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.581933   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:20.581961   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.582197   80857 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:20.586047   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:20.598137   80857 kubeadm.go:883] updating cluster {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:20.598284   80857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:40:20.598355   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:20.646681   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:20.646757   80857 ssh_runner.go:195] Run: which lz4
	I0717 18:40:20.650691   80857 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:20.654703   80857 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:20.654730   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 18:40:22.163706   80857 crio.go:462] duration metric: took 1.513040695s to copy over tarball
	I0717 18:40:22.163783   80857 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:40:24.904256   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.904292   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.904308   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:24.971088   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.971120   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.971136   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.015832   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.015868   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.413309   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.418927   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.418955   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.913026   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.917375   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.917407   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.412566   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.419115   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.419140   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.912680   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.920245   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.920268   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.412854   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.417356   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.417390   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.912883   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.918242   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.918274   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:28.412591   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:28.419257   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:40:28.427814   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:40:28.427842   80401 api_server.go:131] duration metric: took 6.515416451s to wait for apiserver health ...
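(Annotation: the block above shows minikube polling https://192.168.72.216:8443/healthz roughly every 500ms, treating 403/500 responses as "not ready" until a 200 arrives about 6.5s later. The sketch below is an illustrative Go polling loop for such an endpoint; the skipped TLS verification, 5s per-request timeout, and interval/timeout values are assumptions made for a self-contained example, not minikube's client configuration.)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// TLS verification is skipped here purely for illustration.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("healthz at %s not ready within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.216:8443/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```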
	I0717 18:40:28.427854   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:28.427863   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:28.429828   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:40:28.431012   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:28.444822   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
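(Annotation: after recommending the bridge CNI, minikube writes a 496-byte /etc/cni/net.d/1-k8s.conflist. The exact contents are not reproduced in the log, so the sketch below writes a minimal, assumed bridge conflist with host-local IPAM just to illustrate the shape of such a file; every field value here is an assumption, not minikube's actual config.)

```go
package main

import (
	"fmt"
	"os"
)

// A minimal bridge CNI conflist, assumed for illustration; the actual file
// minikube writes is not shown in the log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// Writing to /etc/cni/net.d requires root; the path mirrors the log.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Printf("wrote %d bytes\n", len(bridgeConflist))
}
```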
	I0717 18:40:28.465212   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:28.477639   80401 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:28.477691   80401 system_pods.go:61] "coredns-5cfdc65f69-spj2w" [6849b651-9346-4d96-97a7-88eca7bbd50a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:28.477706   80401 system_pods.go:61] "etcd-no-preload-066175" [be012488-220b-421d-bf16-a3623fafb8fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:28.477721   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [4292a786-61f3-405d-8784-ec8a58e1b124] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:28.477731   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [937a48f4-7fca-4cee-bb50-51f1720960da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:28.477739   80401 system_pods.go:61] "kube-proxy-tn5xn" [f0a910b3-98b6-470f-a5a2-e49369ecb733] Running
	I0717 18:40:28.477748   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [ffa2475c-7a5a-4988-89a2-4727e07356cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:28.477756   80401 system_pods.go:61] "metrics-server-78fcd8795b-mbtvd" [ccd7a565-52ef-49be-b659-31ae20af537a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:28.477761   80401 system_pods.go:61] "storage-provisioner" [19914ecc-2fcc-4cb8-bd78-fb6891dcf85d] Running
	I0717 18:40:28.477769   80401 system_pods.go:74] duration metric: took 12.536267ms to wait for pod list to return data ...
	I0717 18:40:28.477777   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:28.482322   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:28.482348   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:28.482368   80401 node_conditions.go:105] duration metric: took 4.585233ms to run NodePressure ...
	I0717 18:40:28.482387   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.768656   80401 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773308   80401 kubeadm.go:739] kubelet initialised
	I0717 18:40:28.773330   80401 kubeadm.go:740] duration metric: took 4.654448ms waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773338   80401 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:28.778778   80401 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:25.738071   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738580   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738611   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:25.738538   81910 retry.go:31] will retry after 1.505740804s: waiting for machine to come up
	I0717 18:40:27.246293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246651   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246674   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:27.246606   81910 retry.go:31] will retry after 1.574253799s: waiting for machine to come up
	I0717 18:40:28.822159   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822597   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:28.822517   81910 retry.go:31] will retry after 2.132842884s: waiting for machine to come up
	I0717 18:40:25.307875   80857 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.144060111s)
	I0717 18:40:25.307903   80857 crio.go:469] duration metric: took 3.144169984s to extract the tarball
	I0717 18:40:25.307914   80857 ssh_runner.go:146] rm: /preloaded.tar.lz4
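(Annotation: the preload path above copies preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4, extracts it into /var with tar (lz4-compressed, preserving security.capability xattrs), and then removes the tarball. The Go sketch below runs the same tar invocation locally via os/exec; the flags and paths are copied from the log, but this is only an illustration of the step, not minikube's ssh_runner code, and it assumes the lz4 binary and sudo are available.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload runs the tar command shown in the log against a local
// tarball, then deletes it.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
```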
	I0717 18:40:25.354436   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:25.404799   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:25.404827   80857 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:25.404884   80857 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.404936   80857 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 18:40:25.404908   80857 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.404952   80857 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.404998   80857 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.405010   80857 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.406661   80857 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.406667   80857 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 18:40:25.406690   80857 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.407119   80857 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.619950   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 18:40:25.635075   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.641561   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.647362   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.648054   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.649684   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.664183   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.709163   80857 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 18:40:25.709227   80857 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 18:40:25.709275   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.760931   80857 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 18:40:25.760994   80857 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.761042   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.779324   80857 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 18:40:25.779378   80857 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.779429   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799052   80857 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 18:40:25.799097   80857 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.799106   80857 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 18:40:25.799131   80857 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 18:40:25.799190   80857 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.799233   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799136   80857 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.799148   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799298   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.806973   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 18:40:25.807041   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.807066   80857 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 18:40:25.807095   80857 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.807126   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.807137   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.807237   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.811025   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.811114   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.935792   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 18:40:25.935853   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 18:40:25.935863   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 18:40:25.935934   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.935973   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 18:40:25.935996   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 18:40:25.940351   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 18:40:25.970107   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 18:40:26.231894   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:26.372230   80857 cache_images.go:92] duration metric: took 967.383323ms to LoadCachedImages
	W0717 18:40:26.372327   80857 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
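The block above is the cached-image fallback: the preload tarball did not contain the v1.20.0 tags, so each required image is checked in the runtime, removed when the hash does not match, and then loaded from the local cache directory; a missing cache file only produces the warning and startup continues (the images are pulled later instead). A rough sketch of that flow, with imageInRuntime and loadFromCache as hypothetical stand-ins for the podman/crictl calls seen in the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// imageInRuntime stands in for `podman image inspect` over SSH; hypothetical.
func imageInRuntime(tag string) bool { return false }

// loadFromCache stands in for copying and importing a cached image archive; hypothetical.
func loadFromCache(path string) error {
	return fmt.Errorf("stat %s: no such file or directory", path)
}

func loadCachedImages(cacheDir string, tags []string) error {
	var firstErr error
	for _, tag := range tags {
		if imageInRuntime(tag) {
			continue // already present in the runtime, nothing to transfer
		}
		// cache files are named like registry.k8s.io/coredns_1.7.0
		name := strings.ReplaceAll(tag, ":", "_")
		if err := loadFromCache(filepath.Join(cacheDir, name)); err != nil && firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}

func main() {
	tags := []string{"registry.k8s.io/coredns:1.7.0", "registry.k8s.io/pause:3.2"}
	if err := loadCachedImages("/home/jenkins/.minikube/cache/images/amd64", tags); err != nil {
		// the log only warns here and continues; images are pulled later instead
		fmt.Fprintf(os.Stderr, "X Unable to load cached images: %v\n", err)
	}
}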
	I0717 18:40:26.372346   80857 kubeadm.go:934] updating node { 192.168.39.128 8443 v1.20.0 crio true true} ...
	I0717 18:40:26.372517   80857 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-019549 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
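The kubelet unit drop-in shown above is rendered from the node's config (runtime endpoint, hostname override, node IP) and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A minimal sketch of rendering such a drop-in with text/template; the flag set is copied from this log, not from minikube's real template.

package main

import (
	"os"
	"text/template"
)

// kubeletTmpl mirrors the drop-in printed above, with the node-specific
// values parameterised.
const kubeletTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

type nodeConfig struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	cfg := nodeConfig{KubernetesVersion: "v1.20.0", NodeName: "old-k8s-version-019549", NodeIP: "192.168.39.128"}
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	// The rendered text is what gets copied to kubelet.service.d/10-kubeadm.conf.
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}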
	I0717 18:40:26.372613   80857 ssh_runner.go:195] Run: crio config
	I0717 18:40:26.416155   80857 cni.go:84] Creating CNI manager for ""
	I0717 18:40:26.416181   80857 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:26.416196   80857 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:26.416229   80857 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-019549 NodeName:old-k8s-version-019549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 18:40:26.416526   80857 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-019549"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:26.416595   80857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 18:40:26.426941   80857 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:26.427006   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:26.437810   80857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 18:40:26.460046   80857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:26.482521   80857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
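The rendered kubeadm config is staged as kubeadm.yaml.new; later in this run it is diffed against the existing kubeadm.yaml and copied into place before kubeadm is invoked. A small local sketch of that stage-diff-promote pattern (the log does it over SSH with diff and cp; paths here are illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// promoteIfChanged writes the freshly rendered config to a ".new" staging path
// and only replaces the live file when the contents differ.
func promoteIfChanged(live string, rendered []byte) error {
	staging := live + ".new"
	if err := os.WriteFile(staging, rendered, 0o644); err != nil {
		return err
	}
	current, err := os.ReadFile(live)
	if err == nil && bytes.Equal(current, rendered) {
		fmt.Println("config unchanged, keeping", live)
		return nil
	}
	return os.Rename(staging, live) // the log uses `sudo cp`; rename is the local equivalent
}

func main() {
	if err := promoteIfChanged("/tmp/kubeadm.yaml", []byte("kind: ClusterConfiguration\n")); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}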
	I0717 18:40:26.502536   80857 ssh_runner.go:195] Run: grep 192.168.39.128	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:26.506513   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:26.520895   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:26.648931   80857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:26.665278   80857 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549 for IP: 192.168.39.128
	I0717 18:40:26.665300   80857 certs.go:194] generating shared ca certs ...
	I0717 18:40:26.665329   80857 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:26.665508   80857 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:26.665561   80857 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:26.665574   80857 certs.go:256] generating profile certs ...
	I0717 18:40:26.665693   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.key
	I0717 18:40:26.665780   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key.9c9b0a7e
	I0717 18:40:26.665836   80857 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key
	I0717 18:40:26.665998   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:26.666049   80857 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:26.666063   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:26.666095   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:26.666128   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:26.666167   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:26.666225   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:26.667047   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:26.713984   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:26.742617   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:26.770441   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:26.795098   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 18:40:26.825038   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:26.861300   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:26.901664   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:40:26.926357   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:26.948986   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:26.973248   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:26.994642   80857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:27.010158   80857 ssh_runner.go:195] Run: openssl version
	I0717 18:40:27.015861   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:27.026221   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030496   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030567   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.035862   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:27.046312   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:27.057117   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061775   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061824   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.067535   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:27.079022   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:27.090009   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094688   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094768   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.100404   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:27.110653   80857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:27.115117   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:27.120633   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:27.126070   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:27.131500   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:27.137035   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:27.142426   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
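The openssl x509 -checkend 86400 calls above ask whether each certificate expires within the next 24 hours, which decides whether it has to be regenerated. The same question can be answered directly with crypto/x509; a minimal sketch, with the path taken from the log but read locally rather than inside the guest:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// inside the given window, the check `openssl x509 -checkend 86400` performs.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}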
	I0717 18:40:27.147638   80857 kubeadm.go:392] StartCluster: {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:27.147756   80857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:27.147816   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.187433   80857 cri.go:89] found id: ""
	I0717 18:40:27.187498   80857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:27.197001   80857 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:27.197020   80857 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:27.197070   80857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:27.206758   80857 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:27.207822   80857 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:40:27.208505   80857 kubeconfig.go:62] /home/jenkins/minikube-integration/19283-14386/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-019549" cluster setting kubeconfig missing "old-k8s-version-019549" context setting]
	I0717 18:40:27.209497   80857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:27.212786   80857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:27.222612   80857 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.128
	I0717 18:40:27.222649   80857 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:27.222663   80857 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:27.222721   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.268127   80857 cri.go:89] found id: ""
	I0717 18:40:27.268205   80857 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:27.284334   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:27.293669   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:27.293691   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:27.293743   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:27.305348   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:27.305437   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:27.317749   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:27.328481   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:27.328547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:27.337574   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.346242   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:27.346299   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.354946   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:27.363296   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:27.363350   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
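The grep/rm sequence above is the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. A compact local sketch of the same logic (the log runs it over SSH inside the guest):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanupStaleConfigs removes kubeconfig files that do not point at the
// expected control-plane endpoint, mirroring the grep-then-rm loop above.
func cleanupStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			os.Remove(f)
		}
	}
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443", files)
}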
	I0717 18:40:27.371925   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:27.384020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:27.571539   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.767574   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.19599736s)
	I0717 18:40:28.767612   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.011512   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.151980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.258796   80857 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:29.258886   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:29.759072   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.787614   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:33.285208   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:30.956634   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957140   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:30.957059   81910 retry.go:31] will retry after 3.31337478s: waiting for machine to come up
	I0717 18:40:34.272528   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273063   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273094   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:34.273032   81910 retry.go:31] will retry after 3.207729964s: waiting for machine to come up
	I0717 18:40:30.259921   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.758948   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.258967   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.759872   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.259187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.759299   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.259080   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.759583   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.259740   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.759068   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
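The repeated pgrep calls above poll for the kube-apiserver process roughly every 500ms while the control plane comes up. A minimal sketch of that fixed-interval wait, reusing the same pgrep probe the log shows; the 30-second deadline here is illustrative, not minikube's timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiServerRunning asks pgrep whether a matching kube-apiserver process
// exists; pgrep exits non-zero when nothing matches.
func apiServerRunning() bool {
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		if apiServerRunning() {
			fmt.Println("apiserver process appeared")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}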
	I0717 18:40:38.697183   80180 start.go:364] duration metric: took 48.129837953s to acquireMachinesLock for "embed-certs-527415"
	I0717 18:40:38.697248   80180 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:38.697260   80180 fix.go:54] fixHost starting: 
	I0717 18:40:38.697680   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:38.697712   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:38.713575   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0717 18:40:38.713926   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:38.714396   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:40:38.714422   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:38.714762   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:38.714949   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:38.715109   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:40:38.716552   80180 fix.go:112] recreateIfNeeded on embed-certs-527415: state=Stopped err=<nil>
	I0717 18:40:38.716574   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	W0717 18:40:38.716775   80180 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:38.718610   80180 out.go:177] * Restarting existing kvm2 VM for "embed-certs-527415" ...
	I0717 18:40:35.285888   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:36.285651   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.285676   80401 pod_ready.go:81] duration metric: took 7.506876819s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.285686   80401 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292615   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.292638   80401 pod_ready.go:81] duration metric: took 6.944487ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292650   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:38.298338   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
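The pod_ready lines above poll the coredns and control-plane pods until their Ready condition turns True. A hedged sketch of that check with client-go (requires the k8s.io/client-go module; the kubeconfig path and pod name are taken from this log purely for illustration):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True, the check
// behind the `has status "Ready":"False"/"True"` lines above.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is illustrative
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5cfdc65f69-spj2w", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}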
	I0717 18:40:37.484312   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484723   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has current primary IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484740   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Found IP for machine: 192.168.50.245
	I0717 18:40:37.484753   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserving static IP address...
	I0717 18:40:37.485137   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.485161   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserved static IP address: 192.168.50.245
	I0717 18:40:37.485174   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | skip adding static IP to network mk-default-k8s-diff-port-022930 - found existing host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"}
	I0717 18:40:37.485191   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Getting to WaitForSSH function...
	I0717 18:40:37.485207   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for SSH to be available...
	I0717 18:40:37.487397   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487767   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.487796   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487899   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH client type: external
	I0717 18:40:37.487927   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa (-rw-------)
	I0717 18:40:37.487961   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:37.487973   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | About to run SSH command:
	I0717 18:40:37.487992   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | exit 0
	I0717 18:40:37.608746   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:37.609085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetConfigRaw
	I0717 18:40:37.609739   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.612293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612668   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.612689   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612936   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:40:37.613176   81068 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:37.613194   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:37.613391   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.615483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615774   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.615804   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615881   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.616038   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616187   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616306   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.616470   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.616676   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.616691   81068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:37.720971   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:37.721004   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721307   81068 buildroot.go:166] provisioning hostname "default-k8s-diff-port-022930"
	I0717 18:40:37.721340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.724162   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724507   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.724535   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724712   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.724912   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725090   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725259   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.725430   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.725635   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.725651   81068 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-022930 && echo "default-k8s-diff-port-022930" | sudo tee /etc/hostname
	I0717 18:40:37.837366   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-022930
	
	I0717 18:40:37.837389   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.839920   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840291   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.840325   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.840654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840830   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840970   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.841130   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.841344   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.841363   81068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-022930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-022930/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-022930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:37.948311   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
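The SSH command above makes the hostname mapping idempotent: if an entry for the machine name already exists it is rewritten in place, otherwise a 127.0.1.1 line is appended. The earlier control-plane.minikube.internal update works the same way but rewrites the whole file through a temp copy. A small sketch of that upsert as a pure function over the hosts file contents (the tab-separated entry format is an assumption, not guaranteed by the log):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing line for host and appends a fresh
// "ip\thost" line, the same net effect as the grep -v / echo pipeline above.
func upsertHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale entry
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	current := "127.0.0.1\tlocalhost\n192.168.39.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHostsEntry(current, "192.168.39.128", "control-plane.minikube.internal"))
}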
	I0717 18:40:37.948343   81068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:37.948394   81068 buildroot.go:174] setting up certificates
	I0717 18:40:37.948406   81068 provision.go:84] configureAuth start
	I0717 18:40:37.948416   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.948732   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.951214   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951548   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.951578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951693   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.953805   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954086   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.954105   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954250   81068 provision.go:143] copyHostCerts
	I0717 18:40:37.954318   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:37.954334   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:37.954401   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:37.954531   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:37.954542   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:37.954575   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:37.954657   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:37.954667   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:37.954694   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:37.954758   81068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-022930 san=[127.0.0.1 192.168.50.245 default-k8s-diff-port-022930 localhost minikube]
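provision.go then issues a server certificate whose SAN list covers the loopback address, the VM IP, the machine name, localhost and minikube, signed by the CA under .minikube/certs. A minimal sketch of producing a certificate with that SAN list using crypto/x509; it is self-signed here only for brevity, whereas minikube signs with its CA key.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SAN list mirrors the san=[...] entry in the log above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-022930"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-022930", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.245")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}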
	I0717 18:40:38.054084   81068 provision.go:177] copyRemoteCerts
	I0717 18:40:38.054136   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:38.054160   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.056841   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057265   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.057300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.057683   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.057839   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.057982   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.138206   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:38.163105   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 18:40:38.188449   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:38.214829   81068 provision.go:87] duration metric: took 266.409028ms to configureAuth
	I0717 18:40:38.214853   81068 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:38.215005   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:38.215068   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.217684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218010   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.218037   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.218419   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218573   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218706   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.218874   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.219021   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.219039   81068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:38.471162   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:38.471191   81068 machine.go:97] duration metric: took 858.000457ms to provisionDockerMachine
	I0717 18:40:38.471206   81068 start.go:293] postStartSetup for "default-k8s-diff-port-022930" (driver="kvm2")
	I0717 18:40:38.471220   81068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:38.471247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.471558   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:38.471590   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.474241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474673   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.474704   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474868   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.475085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.475245   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.475524   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.554800   81068 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:38.558601   81068 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:38.558624   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:38.558685   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:38.558769   81068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:38.558875   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:38.567664   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:38.589713   81068 start.go:296] duration metric: took 118.491854ms for postStartSetup
	I0717 18:40:38.589754   81068 fix.go:56] duration metric: took 19.496049651s for fixHost
	I0717 18:40:38.589777   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.592433   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592813   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.592860   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592989   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.593188   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593368   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593536   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.593738   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.593937   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.593955   81068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:38.697050   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241638.669121206
	
	I0717 18:40:38.697075   81068 fix.go:216] guest clock: 1721241638.669121206
	I0717 18:40:38.697085   81068 fix.go:229] Guest: 2024-07-17 18:40:38.669121206 +0000 UTC Remote: 2024-07-17 18:40:38.589759024 +0000 UTC m=+204.149894792 (delta=79.362182ms)
	I0717 18:40:38.697108   81068 fix.go:200] guest clock delta is within tolerance: 79.362182ms
	I0717 18:40:38.697118   81068 start.go:83] releasing machines lock for "default-k8s-diff-port-022930", held for 19.603450588s
	I0717 18:40:38.697143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.697381   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:38.700059   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700504   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.700529   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700764   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701541   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701619   81068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:38.701672   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.701777   81068 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:38.701797   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.704169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704478   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.704503   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704657   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.704849   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705002   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705164   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.705262   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.705300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.705496   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.705663   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705817   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705967   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.825607   81068 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:38.831484   81068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:38.972775   81068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:38.978446   81068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:38.978502   81068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:38.999160   81068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:38.999180   81068 start.go:495] detecting cgroup driver to use...
	I0717 18:40:38.999234   81068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:39.016133   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:39.029031   81068 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:39.029083   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:39.042835   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:39.056981   81068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:39.168521   81068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:39.306630   81068 docker.go:233] disabling docker service ...
	I0717 18:40:39.306704   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:39.320435   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:39.337780   81068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:35.259643   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:35.759432   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.259818   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.759627   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.259968   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.758933   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.259980   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.759776   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.259988   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.496847   81068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:39.627783   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:39.641684   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:39.659183   81068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:39.659250   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.669034   81068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:39.669100   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.678708   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.688822   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.699484   81068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:39.709505   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.720715   81068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.736510   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.746991   81068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:39.757265   81068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:39.757320   81068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:39.774777   81068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:39.789593   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:39.907377   81068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:40.039498   81068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:40.039592   81068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:40.044502   81068 start.go:563] Will wait 60s for crictl version
	I0717 18:40:40.044558   81068 ssh_runner.go:195] Run: which crictl
	I0717 18:40:40.048708   81068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:40.087738   81068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:40.087822   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.115460   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.150181   81068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:40:38.719828   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Start
	I0717 18:40:38.720004   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring networks are active...
	I0717 18:40:38.720983   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network default is active
	I0717 18:40:38.721537   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network mk-embed-certs-527415 is active
	I0717 18:40:38.721945   80180 main.go:141] libmachine: (embed-certs-527415) Getting domain xml...
	I0717 18:40:38.722654   80180 main.go:141] libmachine: (embed-certs-527415) Creating domain...
	I0717 18:40:40.007036   80180 main.go:141] libmachine: (embed-certs-527415) Waiting to get IP...
	I0717 18:40:40.007975   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.008511   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.008608   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.008495   82069 retry.go:31] will retry after 268.334211ms: waiting for machine to come up
	I0717 18:40:40.278129   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.278639   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.278670   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.278585   82069 retry.go:31] will retry after 350.00147ms: waiting for machine to come up
	I0717 18:40:40.630229   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.630819   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.630853   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.630768   82069 retry.go:31] will retry after 411.079615ms: waiting for machine to come up
	I0717 18:40:41.043232   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.043851   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.043880   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.043822   82069 retry.go:31] will retry after 387.726284ms: waiting for machine to come up
	I0717 18:40:41.433536   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.434058   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.434092   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.434005   82069 retry.go:31] will retry after 538.564385ms: waiting for machine to come up
	I0717 18:40:41.973917   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.974457   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.974489   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.974395   82069 retry.go:31] will retry after 778.576616ms: waiting for machine to come up
	I0717 18:40:42.754322   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:42.754872   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:42.754899   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:42.754837   82069 retry.go:31] will retry after 758.957234ms: waiting for machine to come up
	I0717 18:40:40.299673   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:40.801297   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.801325   80401 pod_ready.go:81] duration metric: took 4.508666316s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.801339   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807354   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.807372   80401 pod_ready.go:81] duration metric: took 6.024916ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807380   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812934   80401 pod_ready.go:92] pod "kube-proxy-tn5xn" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.812982   80401 pod_ready.go:81] duration metric: took 5.594378ms for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812996   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817940   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.817969   80401 pod_ready.go:81] duration metric: took 4.96427ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817982   80401 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:42.825018   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:40.151220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:40.153791   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:40.154246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154472   81068 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:40.159310   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:40.172121   81068 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:40.172256   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:40.172307   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:40.215863   81068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:40.215940   81068 ssh_runner.go:195] Run: which lz4
	I0717 18:40:40.220502   81068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:40.224682   81068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:40.224714   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:41.511505   81068 crio.go:462] duration metric: took 1.291039238s to copy over tarball
	I0717 18:40:41.511574   81068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:40:43.730839   81068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.219230444s)
	I0717 18:40:43.730901   81068 crio.go:469] duration metric: took 2.219370372s to extract the tarball
	I0717 18:40:43.730912   81068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:43.767876   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:43.809466   81068 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:40:43.809494   81068 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:40:43.809505   81068 kubeadm.go:934] updating node { 192.168.50.245 8444 v1.30.2 crio true true} ...
	I0717 18:40:43.809646   81068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-022930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:43.809740   81068 ssh_runner.go:195] Run: crio config
	I0717 18:40:43.850614   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:43.850635   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:43.850648   81068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:43.850669   81068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-022930 NodeName:default-k8s-diff-port-022930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:43.850795   81068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-022930"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:43.850851   81068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:40:43.862674   81068 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:43.862733   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:43.873304   81068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 18:40:43.888884   81068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:43.903631   81068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 18:40:43.918768   81068 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:43.922033   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:43.932546   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:44.049621   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:44.065718   81068 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930 for IP: 192.168.50.245
	I0717 18:40:44.065747   81068 certs.go:194] generating shared ca certs ...
	I0717 18:40:44.065767   81068 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:44.065939   81068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:44.065999   81068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:44.066016   81068 certs.go:256] generating profile certs ...
	I0717 18:40:44.066149   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/client.key
	I0717 18:40:44.066224   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key.8aa7f0a0
	I0717 18:40:44.066284   81068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key
	I0717 18:40:44.066445   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:44.066494   81068 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:44.066507   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:44.066548   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:44.066579   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:44.066606   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:44.066650   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:44.067421   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:44.104160   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:44.133716   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:44.161170   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:44.190489   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 18:40:44.211792   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:44.232875   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:44.255059   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:44.276826   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:44.298357   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:44.320634   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:44.345428   81068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:44.362934   81068 ssh_runner.go:195] Run: openssl version
	I0717 18:40:44.369764   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:44.382557   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386445   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386483   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.392033   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:44.401987   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:44.411437   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415367   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415419   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.420523   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:44.429915   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:44.439371   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443248   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443301   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.448380   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:44.457828   81068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:44.462151   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:44.467474   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:44.472829   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:40.259910   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:40.759917   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.259718   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.759839   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.259129   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.759772   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.259989   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.759724   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.258978   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.759594   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.515097   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:43.515595   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:43.515616   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:43.515539   82069 retry.go:31] will retry after 1.173590835s: waiting for machine to come up
	I0717 18:40:44.691027   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:44.691479   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:44.691520   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:44.691428   82069 retry.go:31] will retry after 1.594704966s: waiting for machine to come up
	I0717 18:40:46.288022   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:46.288609   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:46.288642   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:46.288549   82069 retry.go:31] will retry after 2.014912325s: waiting for machine to come up
	I0717 18:40:45.323815   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:47.324715   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:44.478397   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:44.483860   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:44.489029   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:40:44.494220   81068 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:44.494329   81068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:44.494381   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.534380   81068 cri.go:89] found id: ""
	I0717 18:40:44.534445   81068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:44.545270   81068 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:44.545287   81068 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:44.545328   81068 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:44.555521   81068 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:44.556584   81068 kubeconfig.go:125] found "default-k8s-diff-port-022930" server: "https://192.168.50.245:8444"
	I0717 18:40:44.558675   81068 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:44.567696   81068 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.245
	I0717 18:40:44.567727   81068 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:44.567739   81068 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:44.567787   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.605757   81068 cri.go:89] found id: ""
	I0717 18:40:44.605833   81068 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:44.622187   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:44.631169   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:44.631191   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:44.631241   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:40:44.639194   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:44.639248   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:44.647542   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:40:44.655622   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:44.655708   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:44.663923   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.671733   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:44.671778   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.680375   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:40:44.688043   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:44.688085   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:44.697020   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:44.705554   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:44.812051   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.351683   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.559471   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.618086   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.678836   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:45.678926   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.179998   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.679083   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.179084   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.679042   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.179150   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.195192   81068 api_server.go:72] duration metric: took 2.516354411s to wait for apiserver process to appear ...
	I0717 18:40:48.195222   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:48.195247   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:45.259185   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:45.759765   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.259009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.759131   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.259477   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.759386   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.259977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.759374   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.259744   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.759440   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.393650   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.393688   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.393705   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.467974   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.468000   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.696340   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.702264   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:50.702308   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.195503   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.200034   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:51.200060   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.695594   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.699593   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:40:51.706025   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:40:51.706048   81068 api_server.go:131] duration metric: took 3.510818337s to wait for apiserver health ...
	I0717 18:40:51.706059   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:51.706067   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:51.707696   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:40:48.305798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:48.306290   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:48.306323   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:48.306232   82069 retry.go:31] will retry after 1.789943402s: waiting for machine to come up
	I0717 18:40:50.098279   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:50.098771   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:50.098798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:50.098734   82069 retry.go:31] will retry after 2.765766483s: waiting for machine to come up
	I0717 18:40:52.867667   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:52.868191   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:52.868212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:52.868139   82069 retry.go:31] will retry after 2.762670644s: waiting for machine to come up
	I0717 18:40:49.325415   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.824015   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:53.824980   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.708887   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:51.718704   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:40:51.735711   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:51.745976   81068 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:51.746009   81068 system_pods.go:61] "coredns-7db6d8ff4d-czk4x" [80cedf0b-248a-458e-994c-81f852d78076] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:51.746022   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f9cf97bf-5fdc-4623-a78c-d29e0352ce40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:51.746036   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [599cef4d-2b4d-4cd5-9552-99de585759eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:51.746051   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [89092470-6fc9-47b2-b680-7c93945d9005] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:51.746062   81068 system_pods.go:61] "kube-proxy-hj7ss" [d260f18e-7a01-4f07-8c6a-87e8f6329f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 18:40:51.746074   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [fe098478-fcb6-4084-b773-11c2cbb995aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:51.746083   81068 system_pods.go:61] "metrics-server-569cc877fc-j9qhx" [18efb008-e7d3-435e-9156-57c16b454d07] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:51.746093   81068 system_pods.go:61] "storage-provisioner" [ac856758-62ca-485f-aa31-5cd1c7d1dbe5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:40:51.746103   81068 system_pods.go:74] duration metric: took 10.373616ms to wait for pod list to return data ...
	I0717 18:40:51.746115   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:51.749151   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:51.749173   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:51.749185   81068 node_conditions.go:105] duration metric: took 3.061813ms to run NodePressure ...
	I0717 18:40:51.749204   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:52.049486   81068 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053636   81068 kubeadm.go:739] kubelet initialised
	I0717 18:40:52.053656   81068 kubeadm.go:740] duration metric: took 4.136528ms waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053665   81068 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:52.058401   81068 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.062406   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062429   81068 pod_ready.go:81] duration metric: took 4.007504ms for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.062439   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062454   81068 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.066161   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066185   81068 pod_ready.go:81] duration metric: took 3.717781ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.066202   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066212   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.070043   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070064   81068 pod_ready.go:81] duration metric: took 3.840533ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.070074   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070080   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:54.077110   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:50.258977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.259867   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.759826   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.259016   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.759708   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.259589   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.759788   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.259753   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.759841   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.633531   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.633999   80180 main.go:141] libmachine: (embed-certs-527415) Found IP for machine: 192.168.61.90
	I0717 18:40:55.634014   80180 main.go:141] libmachine: (embed-certs-527415) Reserving static IP address...
	I0717 18:40:55.634026   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has current primary IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.634407   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.634438   80180 main.go:141] libmachine: (embed-certs-527415) Reserved static IP address: 192.168.61.90
	I0717 18:40:55.634456   80180 main.go:141] libmachine: (embed-certs-527415) DBG | skip adding static IP to network mk-embed-certs-527415 - found existing host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"}
	I0717 18:40:55.634476   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Getting to WaitForSSH function...
	I0717 18:40:55.634490   80180 main.go:141] libmachine: (embed-certs-527415) Waiting for SSH to be available...
	I0717 18:40:55.636604   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.636877   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.636904   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.637010   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH client type: external
	I0717 18:40:55.637032   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa (-rw-------)
	I0717 18:40:55.637063   80180 main.go:141] libmachine: (embed-certs-527415) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:55.637082   80180 main.go:141] libmachine: (embed-certs-527415) DBG | About to run SSH command:
	I0717 18:40:55.637094   80180 main.go:141] libmachine: (embed-certs-527415) DBG | exit 0
	I0717 18:40:55.765208   80180 main.go:141] libmachine: (embed-certs-527415) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:55.765554   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:40:55.766322   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:55.769331   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.769800   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.769827   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.770203   80180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json ...
	I0717 18:40:55.770593   80180 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:55.770620   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:55.770826   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.773837   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774313   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.774346   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774553   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.774750   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.774909   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.775060   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.775277   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.775534   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.775556   80180 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:55.888982   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:55.889013   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889259   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:40:55.889286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889501   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.891900   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892284   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.892302   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892532   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.892701   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892853   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892993   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.893136   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.893293   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.893310   80180 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-527415 && echo "embed-certs-527415" | sudo tee /etc/hostname
	I0717 18:40:56.018869   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-527415
	
	I0717 18:40:56.018898   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.021591   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.021888   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.021909   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.022286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.022489   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022646   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022765   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.022905   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.023050   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.023066   80180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-527415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-527415/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-527415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:56.146411   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:56.146455   80180 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:56.146478   80180 buildroot.go:174] setting up certificates
	I0717 18:40:56.146490   80180 provision.go:84] configureAuth start
	I0717 18:40:56.146502   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:56.146767   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.149369   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149725   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.149755   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149937   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.152431   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152753   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.152774   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152936   80180 provision.go:143] copyHostCerts
	I0717 18:40:56.153028   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:56.153041   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:56.153096   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:56.153186   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:56.153194   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:56.153214   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:56.153277   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:56.153283   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:56.153300   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:56.153349   80180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.embed-certs-527415 san=[127.0.0.1 192.168.61.90 embed-certs-527415 localhost minikube]
	I0717 18:40:56.326978   80180 provision.go:177] copyRemoteCerts
	I0717 18:40:56.327024   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:56.327045   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.329432   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329778   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.329809   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329927   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.330121   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.330295   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.330409   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.415173   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:56.438501   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 18:40:56.460520   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:56.481808   80180 provision.go:87] duration metric: took 335.305142ms to configureAuth
	I0717 18:40:56.481832   80180 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:56.482001   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:56.482063   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.484653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485044   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.485074   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485222   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.485468   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485652   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485810   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.485953   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.486108   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.486123   80180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:56.741135   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:56.741185   80180 machine.go:97] duration metric: took 970.573336ms to provisionDockerMachine
	I0717 18:40:56.741204   80180 start.go:293] postStartSetup for "embed-certs-527415" (driver="kvm2")
	I0717 18:40:56.741221   80180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:56.741245   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.741597   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:56.741625   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.744356   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.744805   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.744831   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.745025   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.745224   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.745382   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.745549   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.835435   80180 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:56.839724   80180 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:56.839753   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:56.839834   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:56.839945   80180 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:56.840083   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:56.849582   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:56.872278   80180 start.go:296] duration metric: took 131.057656ms for postStartSetup
	I0717 18:40:56.872347   80180 fix.go:56] duration metric: took 18.175085798s for fixHost
	I0717 18:40:56.872375   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.874969   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875308   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.875340   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875533   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.875722   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.875955   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.876089   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.876274   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.876459   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.876469   80180 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:56.985888   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241656.959508652
	
	I0717 18:40:56.985907   80180 fix.go:216] guest clock: 1721241656.959508652
	I0717 18:40:56.985914   80180 fix.go:229] Guest: 2024-07-17 18:40:56.959508652 +0000 UTC Remote: 2024-07-17 18:40:56.872354453 +0000 UTC m=+348.896679896 (delta=87.154199ms)
	I0717 18:40:56.985939   80180 fix.go:200] guest clock delta is within tolerance: 87.154199ms
	I0717 18:40:56.985944   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 18.288718042s
	I0717 18:40:56.985964   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.986210   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.988716   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989086   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.989114   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989279   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989786   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989966   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.990055   80180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:56.990092   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.990360   80180 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:56.990390   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.992519   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992816   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.992835   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992852   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992984   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993162   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.993234   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.993356   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993401   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993499   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.993541   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993754   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993915   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:57.116598   80180 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:57.122546   80180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:57.268379   80180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:57.274748   80180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:57.274819   80180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:57.290374   80180 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:57.290394   80180 start.go:495] detecting cgroup driver to use...
	I0717 18:40:57.290443   80180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:57.307521   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:57.323478   80180 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:57.323554   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:57.337078   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:57.350181   80180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:57.463512   80180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:57.626650   80180 docker.go:233] disabling docker service ...
	I0717 18:40:57.626714   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:57.641067   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:57.655085   80180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:57.802789   80180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:57.919140   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:57.932620   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:57.949471   80180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:57.949528   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.960297   80180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:57.960366   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.970890   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.980768   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.990723   80180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:58.000791   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.010332   80180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.026611   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.036106   80180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:58.044742   80180 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:58.044791   80180 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:58.056584   80180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:58.065470   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:58.182119   80180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:58.319330   80180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:58.319400   80180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:58.326361   80180 start.go:563] Will wait 60s for crictl version
	I0717 18:40:58.326405   80180 ssh_runner.go:195] Run: which crictl
	I0717 18:40:58.329951   80180 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:58.366561   80180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:58.366668   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.398483   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.427421   80180 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:40:56.324834   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.325283   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:56.077315   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.077815   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:55.259450   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.759932   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.259395   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.759855   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.259739   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.759436   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.258951   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.759931   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.259588   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.759651   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.428872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:58.431182   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431554   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:58.431580   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431756   80180 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:58.435914   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:58.448777   80180 kubeadm.go:883] updating cluster {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:58.448923   80180 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:58.449018   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:58.488011   80180 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:58.488077   80180 ssh_runner.go:195] Run: which lz4
	I0717 18:40:58.491828   80180 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 18:40:58.495609   80180 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:58.495640   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:59.686445   80180 crio.go:462] duration metric: took 1.194619366s to copy over tarball
	I0717 18:40:59.686513   80180 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:41:01.862679   80180 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176132338s)
	I0717 18:41:01.862710   80180 crio.go:469] duration metric: took 2.176236509s to extract the tarball
	I0717 18:41:01.862719   80180 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:41:01.901813   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:41:01.945403   80180 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:41:01.945429   80180 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:41:01.945438   80180 kubeadm.go:934] updating node { 192.168.61.90 8443 v1.30.2 crio true true} ...
	I0717 18:41:01.945554   80180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-527415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:41:01.945631   80180 ssh_runner.go:195] Run: crio config
	I0717 18:41:01.991102   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:01.991130   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:01.991144   80180 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:41:01.991168   80180 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.90 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-527415 NodeName:embed-certs-527415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:41:01.991331   80180 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-527415"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:41:01.991397   80180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:41:02.001007   80180 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:41:02.001082   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:41:02.010130   80180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0717 18:41:02.025405   80180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:41:02.041167   80180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0717 18:41:02.057441   80180 ssh_runner.go:195] Run: grep 192.168.61.90	control-plane.minikube.internal$ /etc/hosts
	I0717 18:41:02.060878   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:41:02.072984   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:41:02.188194   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:41:02.204599   80180 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415 for IP: 192.168.61.90
	I0717 18:41:02.204623   80180 certs.go:194] generating shared ca certs ...
	I0717 18:41:02.204643   80180 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:41:02.204822   80180 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:41:02.204885   80180 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:41:02.204899   80180 certs.go:256] generating profile certs ...
	I0717 18:41:02.205047   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key
	I0717 18:41:02.205129   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9
	I0717 18:41:02.205188   80180 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key
	I0717 18:41:02.205372   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:41:02.205436   80180 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:41:02.205451   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:41:02.205486   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:41:02.205526   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:41:02.205556   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:41:02.205612   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:41:02.206441   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:41:02.234135   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:41:02.259780   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:41:02.285464   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:41:02.316267   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 18:41:02.348835   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:41:02.375505   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:41:02.402683   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:41:02.426689   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:41:02.449328   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:41:02.472140   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:41:02.494016   80180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:41:02.512612   80180 ssh_runner.go:195] Run: openssl version
	I0717 18:41:02.519908   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:41:02.532706   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538136   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538191   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.545493   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:41:02.558832   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:41:02.570455   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575515   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575582   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.581428   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:41:02.592439   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:41:02.602823   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608370   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608433   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.615367   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:41:02.628355   80180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:41:02.632772   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:41:02.638325   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:41:02.643635   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:41:02.648960   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:41:02.654088   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:41:02.659220   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:41:02.664325   80180 kubeadm.go:392] StartCluster: {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:41:02.664444   80180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:41:02.664495   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.699590   80180 cri.go:89] found id: ""
	I0717 18:41:02.699676   80180 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:41:02.709427   80180 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:41:02.709452   80180 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:41:02.709503   80180 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:41:02.718489   80180 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:41:02.719505   80180 kubeconfig.go:125] found "embed-certs-527415" server: "https://192.168.61.90:8443"
	I0717 18:41:02.721457   80180 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:41:02.730258   80180 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.90
	I0717 18:41:02.730288   80180 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:41:02.730301   80180 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:41:02.730367   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.768268   80180 cri.go:89] found id: ""
	I0717 18:41:02.768339   80180 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:41:02.786699   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:41:02.796888   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:41:02.796912   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:41:02.796965   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:41:02.805633   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:41:02.805703   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:41:02.817624   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:41:02.827840   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:41:02.827902   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:41:02.836207   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.844201   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:41:02.844265   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.852667   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:41:02.860697   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:41:02.860741   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:41:02.869133   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:41:02.877992   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:02.986350   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:00.823447   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.825375   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:00.578095   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.576899   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.576927   81068 pod_ready.go:81] duration metric: took 10.506835962s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.576953   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584912   81068 pod_ready.go:92] pod "kube-proxy-hj7ss" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.584933   81068 pod_ready.go:81] duration metric: took 7.972079ms for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584964   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590342   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.590366   81068 pod_ready.go:81] duration metric: took 5.392364ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590380   81068 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:00.259461   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:00.759148   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.259596   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.759943   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.259670   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.759900   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.259745   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.759843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.259902   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.759850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.874112   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.091026   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.170734   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.292719   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:41:04.292826   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.793710   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.292924   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.792872   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.293626   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.793632   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.810658   80180 api_server.go:72] duration metric: took 2.517938682s to wait for apiserver process to appear ...
	I0717 18:41:06.810685   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:41:06.810705   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:05.323684   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:07.324653   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:04.596794   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:06.597411   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:09.097409   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:05.259624   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.759258   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.259346   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.759041   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.259467   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.759164   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.259047   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.759959   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.259372   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.759259   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.612683   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.612715   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.612728   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.633949   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.633975   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.811272   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.815690   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:09.815720   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:10.311256   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.319587   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.319620   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:10.811133   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.815819   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.815862   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.311037   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.315892   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.315923   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.811534   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.816601   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.816631   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.311178   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.315484   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.315510   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.811068   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.821016   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.821048   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:13.311166   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:13.315879   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:41:13.322661   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:41:13.322700   80180 api_server.go:131] duration metric: took 6.512007091s to wait for apiserver health ...
	I0717 18:41:13.322713   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:13.322722   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:13.324516   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
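	(The api_server.go entries above show minikube polling the apiserver's /healthz endpoint until the 500 responses, failing on the apiservice-discovery-controller poststarthook, turn into a 200. A minimal Go sketch of such a probe loop follows; the endpoint address comes from the log, while the retry interval and the simplified TLS handling with InsecureSkipVerify are assumptions — minikube itself verifies against the cluster CA from the kubeconfig.)

	// healthz_probe.go: illustrative sketch of an apiserver /healthz polling loop.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: the real client trusts the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.61.90:8443/healthz")
			if err != nil {
				time.Sleep(500 * time.Millisecond)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body)) // body is "ok"
				return
			}
			// A 500 carries the per-check breakdown seen in the log, e.g.
			// "[-]poststarthook/apiservice-discovery-controller failed".
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for apiserver health")
	}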
	I0717 18:41:09.325535   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.325697   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:13.327238   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.597479   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:14.098908   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:10.259845   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:10.759671   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.259895   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.759877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.259003   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.759685   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.759844   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.259541   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.759709   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.325935   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:41:13.337601   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
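	(The two cni.go/ssh_runner.go steps above configure the bridge CNI by writing a conflist into /etc/cni/net.d. The actual 496-byte file is not reproduced in the log; the sketch below writes a generic bridge + portmap conflist of the same general shape, purely as an illustration — the subnet, plugin options, and file contents are assumptions, not minikube's exact output.)

	// write_conflist.go: illustrative sketch of dropping a bridge CNI conflist.
	package main

	import "os"

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		// Writing this path normally requires root, matching the sudo mkdir above.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}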
	I0717 18:41:13.354366   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:41:13.364678   80180 system_pods.go:59] 8 kube-system pods found
	I0717 18:41:13.364715   80180 system_pods.go:61] "coredns-7db6d8ff4d-2fnlb" [86d50e9b-fb88-4332-90c5-a969b0654635] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:41:13.364726   80180 system_pods.go:61] "etcd-embed-certs-527415" [9d8ac0a8-4639-48d8-8ac4-88b0bd1e2082] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:41:13.364735   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [7f72c4f9-f1db-4ac6-83e1-2b94245107c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:41:13.364743   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [96081a97-2a90-4fec-84cb-9a399a43aeb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:41:13.364752   80180 system_pods.go:61] "kube-proxy-jltfs" [27f6259e-80cc-4881-bb06-6a2ad529179c] Running
	I0717 18:41:13.364763   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [bed7b515-7ab0-460c-a13f-037f29576f30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:41:13.364775   80180 system_pods.go:61] "metrics-server-569cc877fc-8md44" [1b9d50c8-6ca0-41c3-92d9-eebdccbf1a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:41:13.364783   80180 system_pods.go:61] "storage-provisioner" [ccb34b69-d28d-477e-8c7a-0acdc547bec7] Running
	I0717 18:41:13.364791   80180 system_pods.go:74] duration metric: took 10.40947ms to wait for pod list to return data ...
	I0717 18:41:13.364803   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:41:13.367687   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:41:13.367712   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:41:13.367725   80180 node_conditions.go:105] duration metric: took 2.912986ms to run NodePressure ...
	I0717 18:41:13.367745   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:13.630827   80180 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636658   80180 kubeadm.go:739] kubelet initialised
	I0717 18:41:13.636688   80180 kubeadm.go:740] duration metric: took 5.830484ms waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636699   80180 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:41:13.642171   80180 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.650539   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650573   80180 pod_ready.go:81] duration metric: took 8.374432ms for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.650585   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650599   80180 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.655470   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655500   80180 pod_ready.go:81] duration metric: took 4.8911ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.655512   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655520   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.662448   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662479   80180 pod_ready.go:81] duration metric: took 6.949002ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.662490   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662499   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.757454   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757485   80180 pod_ready.go:81] duration metric: took 94.976348ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.757494   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757501   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157339   80180 pod_ready.go:92] pod "kube-proxy-jltfs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:14.157363   80180 pod_ready.go:81] duration metric: took 399.852649ms for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157381   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:16.163623   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
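	(The pod_ready.go entries above wait for each system-critical pod's Ready condition, skipping pods whose node is not yet Ready. A small client-go sketch of that Ready-condition check follows; the kubeconfig path, namespace, and kube-proxy-jltfs pod name are taken from the log, while the 2-second polling interval is an assumption.)

	// pod_ready_probe.go: illustrative sketch of waiting for a pod's Ready condition.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-jltfs", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod has status Ready: True")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}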
	I0717 18:41:15.825045   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.323440   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:16.596320   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.596807   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:15.259558   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:15.759585   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.259850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.760009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.259385   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.759208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.259218   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.759779   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.259666   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.759781   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.174371   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.664423   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.663932   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:22.663955   80180 pod_ready.go:81] duration metric: took 8.506565077s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:22.663969   80180 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:20.324547   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.824318   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:21.096071   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:23.596775   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.259286   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:20.759048   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.259801   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.759595   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.259582   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.759871   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.259349   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.759659   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.259964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.759899   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.671105   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:27.170247   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:24.825017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.825067   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.096196   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:28.097501   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:25.259559   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:25.759773   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.759924   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.259509   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.759986   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.259792   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.759564   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:29.259060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:29.259143   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:29.298974   80857 cri.go:89] found id: ""
	I0717 18:41:29.299006   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.299016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:29.299024   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:29.299087   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:29.333764   80857 cri.go:89] found id: ""
	I0717 18:41:29.333786   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.333793   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:29.333801   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:29.333849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:29.369639   80857 cri.go:89] found id: ""
	I0717 18:41:29.369674   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.369688   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:29.369697   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:29.369762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:29.403453   80857 cri.go:89] found id: ""
	I0717 18:41:29.403481   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.403489   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:29.403498   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:29.403555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:29.436662   80857 cri.go:89] found id: ""
	I0717 18:41:29.436687   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.436695   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:29.436701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:29.436749   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:29.471013   80857 cri.go:89] found id: ""
	I0717 18:41:29.471053   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.471064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:29.471074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:29.471139   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:29.502754   80857 cri.go:89] found id: ""
	I0717 18:41:29.502780   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.502787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:29.502793   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:29.502842   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:29.534205   80857 cri.go:89] found id: ""
	I0717 18:41:29.534232   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.534239   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:29.534247   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:29.534259   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:29.585406   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:29.585438   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:29.600629   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:29.600660   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:29.719788   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:29.719807   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:29.719819   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:29.785626   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:29.785662   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:29.669918   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.670544   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:29.325013   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.828532   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:30.097685   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.596760   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.325522   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:32.338046   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:32.338120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:32.370073   80857 cri.go:89] found id: ""
	I0717 18:41:32.370099   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.370106   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:32.370112   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:32.370165   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:32.408764   80857 cri.go:89] found id: ""
	I0717 18:41:32.408789   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.408799   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:32.408806   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:32.408862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:32.449078   80857 cri.go:89] found id: ""
	I0717 18:41:32.449108   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.449118   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:32.449125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:32.449176   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:32.481990   80857 cri.go:89] found id: ""
	I0717 18:41:32.482015   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.482022   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:32.482028   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:32.482077   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:32.521902   80857 cri.go:89] found id: ""
	I0717 18:41:32.521932   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.521942   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:32.521949   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:32.521997   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:32.554148   80857 cri.go:89] found id: ""
	I0717 18:41:32.554177   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.554206   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:32.554216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:32.554270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:32.587342   80857 cri.go:89] found id: ""
	I0717 18:41:32.587366   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.587374   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:32.587379   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:32.587425   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:32.619227   80857 cri.go:89] found id: ""
	I0717 18:41:32.619259   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.619270   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:32.619281   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:32.619296   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:32.669085   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:32.669124   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:32.682464   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:32.682500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:32.749218   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:32.749234   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:32.749245   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:32.814510   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:32.814545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:33.670578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.670952   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.671373   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:34.324458   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:36.823615   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:38.825194   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.096041   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.096436   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:39.096906   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.362866   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:35.375563   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:35.375643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:35.412355   80857 cri.go:89] found id: ""
	I0717 18:41:35.412380   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.412388   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:35.412393   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:35.412439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:35.446596   80857 cri.go:89] found id: ""
	I0717 18:41:35.446621   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.446629   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:35.446634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:35.446691   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:35.481695   80857 cri.go:89] found id: ""
	I0717 18:41:35.481717   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.481725   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:35.481730   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:35.481783   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:35.514528   80857 cri.go:89] found id: ""
	I0717 18:41:35.514573   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.514584   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:35.514592   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:35.514657   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:35.547831   80857 cri.go:89] found id: ""
	I0717 18:41:35.547858   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.547871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:35.547879   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:35.547941   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:35.579059   80857 cri.go:89] found id: ""
	I0717 18:41:35.579084   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.579097   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:35.579104   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:35.579164   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:35.616442   80857 cri.go:89] found id: ""
	I0717 18:41:35.616480   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.616487   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:35.616492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:35.616545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:35.647535   80857 cri.go:89] found id: ""
	I0717 18:41:35.647564   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.647571   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:35.647579   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:35.647595   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:35.696664   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:35.696692   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:35.710474   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:35.710499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:35.785569   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:35.785595   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:35.785611   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:35.865750   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:35.865785   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:38.405391   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:38.417737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:38.417806   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:38.453848   80857 cri.go:89] found id: ""
	I0717 18:41:38.453877   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.453888   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:38.453895   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:38.453949   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:38.487083   80857 cri.go:89] found id: ""
	I0717 18:41:38.487112   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.487122   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:38.487129   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:38.487190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:38.517700   80857 cri.go:89] found id: ""
	I0717 18:41:38.517729   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.517738   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:38.517746   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:38.517808   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:38.547587   80857 cri.go:89] found id: ""
	I0717 18:41:38.547616   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.547625   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:38.547632   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:38.547780   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:38.581511   80857 cri.go:89] found id: ""
	I0717 18:41:38.581535   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.581542   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:38.581548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:38.581675   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:38.618308   80857 cri.go:89] found id: ""
	I0717 18:41:38.618327   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.618334   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:38.618340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:38.618401   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:38.658237   80857 cri.go:89] found id: ""
	I0717 18:41:38.658267   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.658278   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:38.658298   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:38.658359   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:38.694044   80857 cri.go:89] found id: ""
	I0717 18:41:38.694071   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.694080   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:38.694090   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:38.694106   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:38.746621   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:38.746658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:38.758781   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:38.758805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:38.827327   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:38.827345   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:38.827357   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:38.899731   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:38.899762   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:40.170106   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:42.170391   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:40.825940   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.327489   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.097668   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.597625   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.437479   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:41.451264   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:41.451336   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:41.489053   80857 cri.go:89] found id: ""
	I0717 18:41:41.489083   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.489093   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:41.489101   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:41.489162   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:41.521954   80857 cri.go:89] found id: ""
	I0717 18:41:41.521985   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.521996   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:41.522003   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:41.522068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:41.556847   80857 cri.go:89] found id: ""
	I0717 18:41:41.556875   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.556884   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:41.556893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:41.557024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:41.591232   80857 cri.go:89] found id: ""
	I0717 18:41:41.591255   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.591263   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:41.591269   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:41.591315   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:41.624533   80857 cri.go:89] found id: ""
	I0717 18:41:41.624565   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.624576   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:41.624583   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:41.624644   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:41.656033   80857 cri.go:89] found id: ""
	I0717 18:41:41.656063   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.656073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:41.656080   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:41.656140   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:41.691686   80857 cri.go:89] found id: ""
	I0717 18:41:41.691715   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.691725   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:41.691732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:41.691789   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:41.724688   80857 cri.go:89] found id: ""
	I0717 18:41:41.724718   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.724729   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:41.724741   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:41.724760   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:41.802855   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:41.802882   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:41.839242   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:41.839271   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:41.889028   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:41.889058   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:41.901598   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:41.901627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:41.972632   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.472824   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:44.487673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:44.487745   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:44.530173   80857 cri.go:89] found id: ""
	I0717 18:41:44.530204   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.530216   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:44.530224   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:44.530288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:44.577865   80857 cri.go:89] found id: ""
	I0717 18:41:44.577891   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.577899   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:44.577905   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:44.577967   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:44.621528   80857 cri.go:89] found id: ""
	I0717 18:41:44.621551   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.621559   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:44.621564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:44.621622   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:44.655456   80857 cri.go:89] found id: ""
	I0717 18:41:44.655488   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.655498   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:44.655505   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:44.655570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:44.688729   80857 cri.go:89] found id: ""
	I0717 18:41:44.688757   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.688767   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:44.688774   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:44.688832   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:44.720190   80857 cri.go:89] found id: ""
	I0717 18:41:44.720220   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.720231   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:44.720238   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:44.720294   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:44.750109   80857 cri.go:89] found id: ""
	I0717 18:41:44.750135   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.750142   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:44.750147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:44.750203   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:44.780039   80857 cri.go:89] found id: ""
	I0717 18:41:44.780066   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.780090   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:44.780098   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:44.780111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:44.829641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:44.829675   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:44.842587   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:44.842616   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:44.906331   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.906355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:44.906369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:44.983364   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:44.983400   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:44.671557   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.170565   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:45.827780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.324627   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:46.096988   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.596469   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.525057   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:47.538586   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:47.538639   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:47.574805   80857 cri.go:89] found id: ""
	I0717 18:41:47.574832   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.574843   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:47.574849   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:47.574906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:47.609576   80857 cri.go:89] found id: ""
	I0717 18:41:47.609603   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.609611   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:47.609617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:47.609662   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:47.643899   80857 cri.go:89] found id: ""
	I0717 18:41:47.643927   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.643936   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:47.643941   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:47.643990   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:47.680365   80857 cri.go:89] found id: ""
	I0717 18:41:47.680404   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.680412   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:47.680418   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:47.680475   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:47.719038   80857 cri.go:89] found id: ""
	I0717 18:41:47.719061   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.719069   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:47.719074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:47.719118   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:47.751708   80857 cri.go:89] found id: ""
	I0717 18:41:47.751735   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.751744   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:47.751750   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:47.751807   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:47.789803   80857 cri.go:89] found id: ""
	I0717 18:41:47.789838   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.789850   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:47.789858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:47.789921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:47.821450   80857 cri.go:89] found id: ""
	I0717 18:41:47.821477   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.821487   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:47.821496   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:47.821511   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:47.886501   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:47.886526   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:47.886544   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:47.960142   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:47.960177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:47.995012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:47.995046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:48.046848   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:48.046884   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:49.670208   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:52.169471   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.324628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.597215   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.096114   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.560990   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:50.574906   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:50.575051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:50.607647   80857 cri.go:89] found id: ""
	I0717 18:41:50.607674   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.607687   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:50.607696   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:50.607756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:50.640621   80857 cri.go:89] found id: ""
	I0717 18:41:50.640651   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.640660   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:50.640667   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:50.640741   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:50.675269   80857 cri.go:89] found id: ""
	I0717 18:41:50.675293   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.675303   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:50.675313   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:50.675369   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:50.707915   80857 cri.go:89] found id: ""
	I0717 18:41:50.707938   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.707946   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:50.707951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:50.708006   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:50.741149   80857 cri.go:89] found id: ""
	I0717 18:41:50.741170   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.741178   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:50.741184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:50.741288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:50.772768   80857 cri.go:89] found id: ""
	I0717 18:41:50.772792   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.772799   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:50.772804   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:50.772854   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:50.804996   80857 cri.go:89] found id: ""
	I0717 18:41:50.805018   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.805028   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:50.805035   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:50.805094   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:50.838933   80857 cri.go:89] found id: ""
	I0717 18:41:50.838960   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.838971   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:50.838982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:50.838997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:50.886415   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:50.886444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:50.899024   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:50.899049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:50.965388   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:50.965416   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:50.965434   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:51.044449   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:51.044490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.580749   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:53.593759   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:53.593841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:53.626541   80857 cri.go:89] found id: ""
	I0717 18:41:53.626573   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.626582   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:53.626588   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:53.626645   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:53.658492   80857 cri.go:89] found id: ""
	I0717 18:41:53.658520   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.658529   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:53.658537   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:53.658600   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:53.694546   80857 cri.go:89] found id: ""
	I0717 18:41:53.694582   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.694590   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:53.694595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:53.694650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:53.727028   80857 cri.go:89] found id: ""
	I0717 18:41:53.727053   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.727061   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:53.727067   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:53.727129   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:53.762869   80857 cri.go:89] found id: ""
	I0717 18:41:53.762897   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.762906   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:53.762913   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:53.762976   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:53.794133   80857 cri.go:89] found id: ""
	I0717 18:41:53.794158   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.794166   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:53.794172   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:53.794225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:53.828432   80857 cri.go:89] found id: ""
	I0717 18:41:53.828463   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.828473   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:53.828484   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:53.828546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:53.863316   80857 cri.go:89] found id: ""
	I0717 18:41:53.863345   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.863353   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:53.863362   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:53.863384   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.897353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:53.897380   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:53.944213   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:53.944242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:53.957484   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:53.957509   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:54.025962   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:54.025992   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:54.026006   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:54.170642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.672407   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.325017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:57.823877   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.596492   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:58.096397   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.609502   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:56.621849   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:56.621913   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:56.657469   80857 cri.go:89] found id: ""
	I0717 18:41:56.657498   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.657510   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:56.657517   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:56.657579   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:56.691298   80857 cri.go:89] found id: ""
	I0717 18:41:56.691320   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.691327   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:56.691332   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:56.691386   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:56.723305   80857 cri.go:89] found id: ""
	I0717 18:41:56.723334   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.723344   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:56.723352   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:56.723417   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:56.755893   80857 cri.go:89] found id: ""
	I0717 18:41:56.755918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.755926   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:56.755931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:56.755982   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:56.787777   80857 cri.go:89] found id: ""
	I0717 18:41:56.787807   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.787819   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:56.787828   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:56.787894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:56.821126   80857 cri.go:89] found id: ""
	I0717 18:41:56.821152   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.821163   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:56.821170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:56.821228   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:56.855894   80857 cri.go:89] found id: ""
	I0717 18:41:56.855918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.855926   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:56.855931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:56.855980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:56.893483   80857 cri.go:89] found id: ""
	I0717 18:41:56.893505   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.893512   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:56.893521   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:56.893532   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:56.945355   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:56.945385   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:56.958426   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:56.958451   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:57.025542   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:57.025571   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:57.025585   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:57.100497   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:57.100528   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:59.636400   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:59.648517   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:59.648571   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:59.683954   80857 cri.go:89] found id: ""
	I0717 18:41:59.683978   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.683988   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:59.683995   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:59.684065   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:59.719135   80857 cri.go:89] found id: ""
	I0717 18:41:59.719162   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.719172   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:59.719179   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:59.719243   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:59.755980   80857 cri.go:89] found id: ""
	I0717 18:41:59.756012   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.756023   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:59.756030   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:59.756091   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:59.788147   80857 cri.go:89] found id: ""
	I0717 18:41:59.788176   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.788185   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:59.788191   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:59.788239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:59.819646   80857 cri.go:89] found id: ""
	I0717 18:41:59.819670   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.819679   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:59.819685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:59.819738   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:59.852487   80857 cri.go:89] found id: ""
	I0717 18:41:59.852508   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.852516   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:59.852521   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:59.852586   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:59.883761   80857 cri.go:89] found id: ""
	I0717 18:41:59.883794   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.883805   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:59.883812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:59.883870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:59.914854   80857 cri.go:89] found id: ""
	I0717 18:41:59.914882   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.914889   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:59.914896   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:59.914909   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:59.995619   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:59.995650   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:00.034444   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:00.034472   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:59.172253   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.670422   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:59.824347   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.824444   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:03.826580   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.096457   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:02.596587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.084278   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:00.084308   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:00.097771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:00.097796   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:00.161753   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:02.662134   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:02.676200   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:02.676277   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:02.711606   80857 cri.go:89] found id: ""
	I0717 18:42:02.711640   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.711652   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:02.711659   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:02.711711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:02.744704   80857 cri.go:89] found id: ""
	I0717 18:42:02.744728   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.744735   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:02.744741   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:02.744800   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:02.778815   80857 cri.go:89] found id: ""
	I0717 18:42:02.778846   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.778859   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:02.778868   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:02.778936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:02.810896   80857 cri.go:89] found id: ""
	I0717 18:42:02.810928   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.810941   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:02.810950   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:02.811024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:02.843868   80857 cri.go:89] found id: ""
	I0717 18:42:02.843892   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.843903   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:02.843910   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:02.843972   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:02.876311   80857 cri.go:89] found id: ""
	I0717 18:42:02.876338   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.876348   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:02.876356   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:02.876420   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:02.910752   80857 cri.go:89] found id: ""
	I0717 18:42:02.910776   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.910784   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:02.910789   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:02.910835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:02.947286   80857 cri.go:89] found id: ""
	I0717 18:42:02.947318   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.947328   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:02.947337   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:02.947351   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:02.999512   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:02.999542   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:03.014063   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:03.014094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:03.081822   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:03.081844   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:03.081858   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:03.161088   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:03.161117   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:04.171168   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.669508   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.324608   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:08.825084   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:04.597129   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:07.098716   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:05.699198   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:05.711597   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:05.711654   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:05.749653   80857 cri.go:89] found id: ""
	I0717 18:42:05.749684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.749694   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:05.749703   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:05.749757   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:05.785095   80857 cri.go:89] found id: ""
	I0717 18:42:05.785118   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.785125   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:05.785134   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:05.785179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:05.818085   80857 cri.go:89] found id: ""
	I0717 18:42:05.818111   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.818119   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:05.818125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:05.818171   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:05.851872   80857 cri.go:89] found id: ""
	I0717 18:42:05.851895   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.851902   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:05.851907   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:05.851958   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:05.883924   80857 cri.go:89] found id: ""
	I0717 18:42:05.883948   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.883958   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:05.883965   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:05.884025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:05.916365   80857 cri.go:89] found id: ""
	I0717 18:42:05.916396   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.916407   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:05.916414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:05.916473   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:05.950656   80857 cri.go:89] found id: ""
	I0717 18:42:05.950684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.950695   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:05.950701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:05.950762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:05.992132   80857 cri.go:89] found id: ""
	I0717 18:42:05.992160   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.992169   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:05.992177   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:05.992190   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:06.042162   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:06.042192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:06.055594   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:06.055619   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:06.123007   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:06.123038   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:06.123068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:06.200429   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:06.200460   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.739039   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:08.751520   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:08.751575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:08.783765   80857 cri.go:89] found id: ""
	I0717 18:42:08.783794   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.783805   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:08.783812   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:08.783864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:08.815200   80857 cri.go:89] found id: ""
	I0717 18:42:08.815227   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.815236   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:08.815242   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:08.815289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:08.848970   80857 cri.go:89] found id: ""
	I0717 18:42:08.849002   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.849012   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:08.849021   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:08.849084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:08.881832   80857 cri.go:89] found id: ""
	I0717 18:42:08.881859   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.881866   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:08.881874   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:08.881922   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:08.913119   80857 cri.go:89] found id: ""
	I0717 18:42:08.913142   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.913149   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:08.913155   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:08.913201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:08.947471   80857 cri.go:89] found id: ""
	I0717 18:42:08.947499   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.947509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:08.947515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:08.947570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:08.979570   80857 cri.go:89] found id: ""
	I0717 18:42:08.979599   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.979609   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:08.979615   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:08.979670   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:09.012960   80857 cri.go:89] found id: ""
	I0717 18:42:09.012991   80857 logs.go:276] 0 containers: []
	W0717 18:42:09.013002   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:09.013012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:09.013027   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:09.065732   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:09.065769   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:09.079572   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:09.079602   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:09.151737   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:09.151754   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:09.151766   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:09.230185   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:09.230218   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.670185   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:10.671336   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.325340   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:13.824087   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:09.595757   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.596784   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:14.096765   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.767189   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:11.780044   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:11.780115   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:11.812700   80857 cri.go:89] found id: ""
	I0717 18:42:11.812722   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.812730   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:11.812736   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:11.812781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:11.846855   80857 cri.go:89] found id: ""
	I0717 18:42:11.846883   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.846893   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:11.846900   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:11.846962   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:11.877671   80857 cri.go:89] found id: ""
	I0717 18:42:11.877700   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.877710   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:11.877716   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:11.877767   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:11.908703   80857 cri.go:89] found id: ""
	I0717 18:42:11.908728   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.908735   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:11.908740   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:11.908786   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:11.942191   80857 cri.go:89] found id: ""
	I0717 18:42:11.942218   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.942225   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:11.942231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:11.942284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:11.974751   80857 cri.go:89] found id: ""
	I0717 18:42:11.974782   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.974798   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:11.974807   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:11.974876   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:12.006287   80857 cri.go:89] found id: ""
	I0717 18:42:12.006317   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.006327   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:12.006335   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:12.006396   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:12.036524   80857 cri.go:89] found id: ""
	I0717 18:42:12.036546   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.036554   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:12.036575   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:12.036599   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:12.085073   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:12.085109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:12.098908   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:12.098937   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:12.161665   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:12.161687   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:12.161702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:12.240349   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:12.240401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:14.781101   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:14.794081   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:14.794149   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:14.828975   80857 cri.go:89] found id: ""
	I0717 18:42:14.829003   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.829013   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:14.829021   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:14.829072   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:14.864858   80857 cri.go:89] found id: ""
	I0717 18:42:14.864886   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.864896   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:14.864903   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:14.864986   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:14.897961   80857 cri.go:89] found id: ""
	I0717 18:42:14.897983   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.897991   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:14.897996   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:14.898041   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:14.935499   80857 cri.go:89] found id: ""
	I0717 18:42:14.935521   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.935529   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:14.935534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:14.935591   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:14.967581   80857 cri.go:89] found id: ""
	I0717 18:42:14.967605   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.967621   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:14.967629   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:14.967688   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:15.001844   80857 cri.go:89] found id: ""
	I0717 18:42:15.001876   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.001888   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:15.001894   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:15.001942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:15.038940   80857 cri.go:89] found id: ""
	I0717 18:42:15.038967   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.038977   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:15.038985   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:15.039043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:13.170111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.669712   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:17.669916   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.325511   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:18.823820   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.597587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:19.096905   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.072636   80857 cri.go:89] found id: ""
	I0717 18:42:15.072665   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.072677   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:15.072688   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:15.072703   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:15.124889   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:15.124934   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:15.138661   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:15.138691   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:15.208762   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:15.208791   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:15.208806   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:15.281302   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:15.281336   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:17.817136   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:17.831013   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:17.831078   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:17.867065   80857 cri.go:89] found id: ""
	I0717 18:42:17.867091   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.867101   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:17.867108   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:17.867166   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:17.904143   80857 cri.go:89] found id: ""
	I0717 18:42:17.904171   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.904180   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:17.904188   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:17.904248   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:17.937450   80857 cri.go:89] found id: ""
	I0717 18:42:17.937478   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.937487   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:17.937492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:17.937556   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:17.970650   80857 cri.go:89] found id: ""
	I0717 18:42:17.970679   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.970689   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:17.970696   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:17.970754   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:18.002329   80857 cri.go:89] found id: ""
	I0717 18:42:18.002355   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.002364   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:18.002371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:18.002430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:18.035253   80857 cri.go:89] found id: ""
	I0717 18:42:18.035278   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.035288   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:18.035295   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:18.035356   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:18.070386   80857 cri.go:89] found id: ""
	I0717 18:42:18.070419   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.070431   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:18.070439   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:18.070507   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:18.106148   80857 cri.go:89] found id: ""
	I0717 18:42:18.106170   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.106177   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:18.106185   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:18.106201   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:18.157359   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:18.157390   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:18.171757   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:18.171782   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:18.242795   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:18.242818   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:18.242831   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:18.316221   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:18.316255   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:19.670562   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.171111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.824266   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.824366   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:21.596773   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.098051   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.857953   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:20.870813   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:20.870882   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:20.906033   80857 cri.go:89] found id: ""
	I0717 18:42:20.906065   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.906075   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:20.906083   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:20.906142   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:20.942292   80857 cri.go:89] found id: ""
	I0717 18:42:20.942316   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.942335   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:20.942342   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:20.942403   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:20.985113   80857 cri.go:89] found id: ""
	I0717 18:42:20.985143   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.985151   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:20.985157   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:20.985217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:21.021807   80857 cri.go:89] found id: ""
	I0717 18:42:21.021834   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.021842   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:21.021847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:21.021906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:21.061924   80857 cri.go:89] found id: ""
	I0717 18:42:21.061949   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.061961   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:21.061969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:21.062025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:21.098890   80857 cri.go:89] found id: ""
	I0717 18:42:21.098916   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.098927   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:21.098935   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:21.098991   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:21.132576   80857 cri.go:89] found id: ""
	I0717 18:42:21.132612   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.132621   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:21.132627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:21.132687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:21.167723   80857 cri.go:89] found id: ""
	I0717 18:42:21.167765   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.167778   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:21.167788   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:21.167803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:21.220427   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:21.220461   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:21.233191   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:21.233216   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:21.304462   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:21.304481   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:21.304498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:21.386887   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:21.386925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:23.926518   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:23.940470   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:23.940534   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:23.976739   80857 cri.go:89] found id: ""
	I0717 18:42:23.976763   80857 logs.go:276] 0 containers: []
	W0717 18:42:23.976773   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:23.976778   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:23.976838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:24.007575   80857 cri.go:89] found id: ""
	I0717 18:42:24.007603   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.007612   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:24.007617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:24.007671   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:24.040430   80857 cri.go:89] found id: ""
	I0717 18:42:24.040455   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.040463   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:24.040468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:24.040581   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:24.071602   80857 cri.go:89] found id: ""
	I0717 18:42:24.071629   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.071638   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:24.071644   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:24.071705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:24.109570   80857 cri.go:89] found id: ""
	I0717 18:42:24.109595   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.109602   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:24.109607   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:24.109667   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:24.144284   80857 cri.go:89] found id: ""
	I0717 18:42:24.144305   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.144328   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:24.144333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:24.144382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:24.179441   80857 cri.go:89] found id: ""
	I0717 18:42:24.179467   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.179474   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:24.179479   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:24.179545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:24.222100   80857 cri.go:89] found id: ""
	I0717 18:42:24.222133   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.222143   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:24.222159   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:24.222175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:24.273181   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:24.273215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:24.285835   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:24.285861   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:24.357804   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:24.357826   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:24.357839   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:24.437270   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:24.437310   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:24.670033   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.671014   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:27.325296   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.597795   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.098055   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
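	(Editor's note: the interleaved pod_ready.go lines come from separate minikube processes (pids 80180, 80401, 81068) polling metrics-server pods that never report Ready. A hedged equivalent check from the host, assuming the matching kubeconfig/context for each profile, which is not shown in this excerpt; the pod name is taken from the log:)

	# hypothetical one-liner mirroring the pod_ready.go poll above
	kubectl -n kube-system get pod metrics-server-569cc877fc-8md44 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is not Ready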
	I0717 18:42:26.979543   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:26.992443   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:26.992497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:27.025520   80857 cri.go:89] found id: ""
	I0717 18:42:27.025548   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.025560   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:27.025567   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:27.025630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:27.059971   80857 cri.go:89] found id: ""
	I0717 18:42:27.060002   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.060011   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:27.060016   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:27.060068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:27.091370   80857 cri.go:89] found id: ""
	I0717 18:42:27.091397   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.091407   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:27.091415   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:27.091468   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:27.123736   80857 cri.go:89] found id: ""
	I0717 18:42:27.123768   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.123779   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:27.123786   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:27.123849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:27.156155   80857 cri.go:89] found id: ""
	I0717 18:42:27.156177   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.156185   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:27.156190   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:27.156239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:27.190701   80857 cri.go:89] found id: ""
	I0717 18:42:27.190729   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.190741   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:27.190749   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:27.190825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:27.222093   80857 cri.go:89] found id: ""
	I0717 18:42:27.222119   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.222130   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:27.222137   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:27.222199   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:27.258789   80857 cri.go:89] found id: ""
	I0717 18:42:27.258813   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.258824   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:27.258834   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:27.258848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:27.307033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:27.307068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:27.321181   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:27.321209   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:27.390560   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:27.390593   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:27.390613   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:27.464352   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:27.464389   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:30.005732   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:30.019088   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:30.019160   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:29.170578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.670221   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.327610   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.824292   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.824392   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.595937   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.597622   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:30.052733   80857 cri.go:89] found id: ""
	I0717 18:42:30.052757   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.052765   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:30.052775   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:30.052836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:30.087683   80857 cri.go:89] found id: ""
	I0717 18:42:30.087711   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.087722   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:30.087729   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:30.087774   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:30.124371   80857 cri.go:89] found id: ""
	I0717 18:42:30.124404   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.124416   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:30.124432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:30.124487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:30.160081   80857 cri.go:89] found id: ""
	I0717 18:42:30.160107   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.160115   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:30.160122   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:30.160173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:30.194420   80857 cri.go:89] found id: ""
	I0717 18:42:30.194447   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.194456   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:30.194464   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:30.194522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:30.229544   80857 cri.go:89] found id: ""
	I0717 18:42:30.229570   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.229584   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:30.229591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:30.229650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:30.264164   80857 cri.go:89] found id: ""
	I0717 18:42:30.264193   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.264204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:30.264211   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:30.264266   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:30.296958   80857 cri.go:89] found id: ""
	I0717 18:42:30.296986   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.296996   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:30.297008   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:30.297049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:30.348116   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:30.348145   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:30.361373   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:30.361401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:30.429601   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:30.429620   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:30.429634   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:30.507718   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:30.507752   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:33.045539   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:33.058149   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:33.058219   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:33.088675   80857 cri.go:89] found id: ""
	I0717 18:42:33.088702   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.088710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:33.088717   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:33.088773   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:33.121269   80857 cri.go:89] found id: ""
	I0717 18:42:33.121297   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.121308   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:33.121315   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:33.121375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:33.156144   80857 cri.go:89] found id: ""
	I0717 18:42:33.156173   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.156184   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:33.156192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:33.156257   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:33.188559   80857 cri.go:89] found id: ""
	I0717 18:42:33.188585   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.188597   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:33.188603   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:33.188651   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:33.219650   80857 cri.go:89] found id: ""
	I0717 18:42:33.219672   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.219680   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:33.219686   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:33.219746   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:33.249704   80857 cri.go:89] found id: ""
	I0717 18:42:33.249728   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.249737   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:33.249742   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:33.249793   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:33.283480   80857 cri.go:89] found id: ""
	I0717 18:42:33.283503   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.283511   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:33.283516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:33.283560   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:33.314577   80857 cri.go:89] found id: ""
	I0717 18:42:33.314620   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.314629   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:33.314638   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:33.314649   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:33.363458   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:33.363491   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:33.377240   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:33.377267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:33.442939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:33.442961   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:33.442976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:33.522422   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:33.522456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:34.170638   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.171034   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.324780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.824832   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.097788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.596054   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.063823   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:36.078272   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:36.078342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:36.111460   80857 cri.go:89] found id: ""
	I0717 18:42:36.111494   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.111502   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:36.111509   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:36.111562   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:36.144191   80857 cri.go:89] found id: ""
	I0717 18:42:36.144222   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.144232   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:36.144239   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:36.144306   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:36.177247   80857 cri.go:89] found id: ""
	I0717 18:42:36.177277   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.177288   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:36.177294   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:36.177350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:36.213390   80857 cri.go:89] found id: ""
	I0717 18:42:36.213419   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.213427   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:36.213433   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:36.213493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:36.246775   80857 cri.go:89] found id: ""
	I0717 18:42:36.246799   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.246807   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:36.246812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:36.246870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:36.282441   80857 cri.go:89] found id: ""
	I0717 18:42:36.282463   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.282470   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:36.282476   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:36.282529   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:36.314178   80857 cri.go:89] found id: ""
	I0717 18:42:36.314203   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.314211   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:36.314216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:36.314265   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:36.353705   80857 cri.go:89] found id: ""
	I0717 18:42:36.353730   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.353737   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:36.353746   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:36.353758   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:36.370866   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:36.370894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:36.463660   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:36.463693   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:36.463710   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:36.540337   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:36.540371   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:36.575770   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:36.575801   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.128675   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:39.141187   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:39.141255   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:39.175960   80857 cri.go:89] found id: ""
	I0717 18:42:39.175982   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.175989   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:39.175994   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:39.176051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:39.209442   80857 cri.go:89] found id: ""
	I0717 18:42:39.209472   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.209483   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:39.209490   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:39.209552   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:39.243225   80857 cri.go:89] found id: ""
	I0717 18:42:39.243249   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.243256   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:39.243262   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:39.243309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:39.277369   80857 cri.go:89] found id: ""
	I0717 18:42:39.277396   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.277407   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:39.277414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:39.277464   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:39.310522   80857 cri.go:89] found id: ""
	I0717 18:42:39.310552   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.310563   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:39.310570   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:39.310637   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:39.344186   80857 cri.go:89] found id: ""
	I0717 18:42:39.344208   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.344216   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:39.344221   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:39.344279   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:39.375329   80857 cri.go:89] found id: ""
	I0717 18:42:39.375354   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.375366   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:39.375372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:39.375419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:39.412629   80857 cri.go:89] found id: ""
	I0717 18:42:39.412659   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.412668   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:39.412679   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:39.412696   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:39.447607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:39.447644   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.498981   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:39.499013   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:39.512380   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:39.512409   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:39.580396   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:39.580415   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:39.580428   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:38.670213   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:41.170284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.825257   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:43.324155   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.596267   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.597199   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.158145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:42.177450   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:42.177522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:42.222849   80857 cri.go:89] found id: ""
	I0717 18:42:42.222880   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.222890   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:42.222897   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:42.222954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:42.252712   80857 cri.go:89] found id: ""
	I0717 18:42:42.252742   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.252752   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:42.252757   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:42.252802   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:42.283764   80857 cri.go:89] found id: ""
	I0717 18:42:42.283789   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.283799   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:42.283806   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:42.283864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:42.317243   80857 cri.go:89] found id: ""
	I0717 18:42:42.317270   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.317281   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:42.317288   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:42.317350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:42.349972   80857 cri.go:89] found id: ""
	I0717 18:42:42.350000   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.350010   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:42.350017   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:42.350074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:42.382111   80857 cri.go:89] found id: ""
	I0717 18:42:42.382146   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.382158   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:42.382165   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:42.382223   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:42.414669   80857 cri.go:89] found id: ""
	I0717 18:42:42.414692   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.414700   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:42.414705   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:42.414765   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:42.446533   80857 cri.go:89] found id: ""
	I0717 18:42:42.446571   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.446579   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:42.446588   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:42.446603   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:42.522142   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:42.522165   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:42.522177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:42.602456   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:42.602493   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:42.642192   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:42.642221   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:42.695016   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:42.695046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:43.170955   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.670631   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.325626   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.097244   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.097783   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.208310   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:45.221821   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:45.221901   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:45.256887   80857 cri.go:89] found id: ""
	I0717 18:42:45.256914   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.256924   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:45.256930   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:45.256999   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:45.293713   80857 cri.go:89] found id: ""
	I0717 18:42:45.293735   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.293748   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:45.293753   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:45.293799   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:45.328790   80857 cri.go:89] found id: ""
	I0717 18:42:45.328815   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.328824   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:45.328833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:45.328880   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:45.364977   80857 cri.go:89] found id: ""
	I0717 18:42:45.365004   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.365014   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:45.365022   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:45.365084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:45.401131   80857 cri.go:89] found id: ""
	I0717 18:42:45.401157   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.401164   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:45.401170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:45.401217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:45.432252   80857 cri.go:89] found id: ""
	I0717 18:42:45.432279   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.432287   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:45.432293   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:45.432338   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:45.464636   80857 cri.go:89] found id: ""
	I0717 18:42:45.464659   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.464667   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:45.464674   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:45.464728   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:45.494884   80857 cri.go:89] found id: ""
	I0717 18:42:45.494913   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.494924   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:45.494935   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:45.494949   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:45.546578   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:45.546610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:45.559622   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:45.559647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:45.622094   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:45.622114   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:45.622126   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:45.699772   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:45.699814   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.241667   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:48.254205   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:48.254270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:48.293258   80857 cri.go:89] found id: ""
	I0717 18:42:48.293287   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.293298   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:48.293305   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:48.293362   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:48.328778   80857 cri.go:89] found id: ""
	I0717 18:42:48.328807   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.328818   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:48.328824   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:48.328884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:48.360230   80857 cri.go:89] found id: ""
	I0717 18:42:48.360256   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.360266   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:48.360276   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:48.360335   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:48.397770   80857 cri.go:89] found id: ""
	I0717 18:42:48.397797   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.397808   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:48.397815   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:48.397873   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:48.430912   80857 cri.go:89] found id: ""
	I0717 18:42:48.430938   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.430946   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:48.430956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:48.431015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:48.462659   80857 cri.go:89] found id: ""
	I0717 18:42:48.462688   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.462699   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:48.462706   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:48.462771   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:48.497554   80857 cri.go:89] found id: ""
	I0717 18:42:48.497584   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.497594   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:48.497601   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:48.497665   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:48.529524   80857 cri.go:89] found id: ""
	I0717 18:42:48.529547   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.529555   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:48.529564   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:48.529577   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:48.601265   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:48.601285   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:48.601297   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:48.678045   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:48.678075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.718565   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:48.718598   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:48.769923   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:48.769956   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
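	(Editor's note: the recurring "connection to the server localhost:8443 was refused" lines mean the kube-apiserver container never came up under CRI-O, so nothing is listening on the apiserver port. A quick hedged check from inside the node; `ss` is a standard tool and not part of the logged commands:)

	# hypothetical check that nothing is bound to the apiserver port
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	sudo crictl ps -a   # full container list, same as the "container status" step in the log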
	I0717 18:42:48.169777   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.669643   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.670334   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.324997   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.824163   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:49.596927   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.097602   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:51.282887   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:51.295778   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:51.295848   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:51.329324   80857 cri.go:89] found id: ""
	I0717 18:42:51.329351   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.329361   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:51.329369   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:51.329434   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:51.362013   80857 cri.go:89] found id: ""
	I0717 18:42:51.362042   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.362052   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:51.362059   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:51.362120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:51.395039   80857 cri.go:89] found id: ""
	I0717 18:42:51.395069   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.395080   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:51.395087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:51.395155   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:51.427683   80857 cri.go:89] found id: ""
	I0717 18:42:51.427709   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.427717   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:51.427722   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:51.427772   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:51.461683   80857 cri.go:89] found id: ""
	I0717 18:42:51.461706   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.461718   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:51.461723   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:51.461769   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:51.495780   80857 cri.go:89] found id: ""
	I0717 18:42:51.495802   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.495810   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:51.495816   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:51.495867   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:51.527541   80857 cri.go:89] found id: ""
	I0717 18:42:51.527573   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.527583   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:51.527591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:51.527648   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:51.567947   80857 cri.go:89] found id: ""
	I0717 18:42:51.567975   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.567987   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:51.567997   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:51.568014   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:51.620083   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:51.620109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:51.632823   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:51.632848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:51.705731   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:51.705753   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:51.705767   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:51.781969   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:51.782005   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.318011   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:54.331886   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:54.331942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:54.362935   80857 cri.go:89] found id: ""
	I0717 18:42:54.362962   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.362972   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:54.362979   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:54.363032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:54.396153   80857 cri.go:89] found id: ""
	I0717 18:42:54.396180   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.396191   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:54.396198   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:54.396259   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:54.433123   80857 cri.go:89] found id: ""
	I0717 18:42:54.433150   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.433160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:54.433168   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:54.433224   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:54.465034   80857 cri.go:89] found id: ""
	I0717 18:42:54.465064   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.465079   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:54.465087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:54.465200   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:54.496200   80857 cri.go:89] found id: ""
	I0717 18:42:54.496250   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.496263   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:54.496271   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:54.496332   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:54.528618   80857 cri.go:89] found id: ""
	I0717 18:42:54.528646   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.528656   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:54.528664   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:54.528724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:54.563018   80857 cri.go:89] found id: ""
	I0717 18:42:54.563042   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.563052   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:54.563059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:54.563114   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:54.595221   80857 cri.go:89] found id: ""
	I0717 18:42:54.595256   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.595266   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:54.595275   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:54.595291   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:54.608193   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:54.608220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:54.673755   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:54.673778   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:54.673793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:54.756443   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:54.756483   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.792670   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:54.792700   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:55.169224   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.169851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.824614   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.324611   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.596824   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:56.597638   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.096992   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.344637   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:57.357003   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:57.357068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:57.389230   80857 cri.go:89] found id: ""
	I0717 18:42:57.389261   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.389271   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:57.389278   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:57.389372   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:57.421529   80857 cri.go:89] found id: ""
	I0717 18:42:57.421553   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.421571   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:57.421578   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:57.421642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:57.455154   80857 cri.go:89] found id: ""
	I0717 18:42:57.455186   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.455193   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:57.455199   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:57.455245   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:57.490576   80857 cri.go:89] found id: ""
	I0717 18:42:57.490608   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.490621   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:57.490630   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:57.490693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:57.523972   80857 cri.go:89] found id: ""
	I0717 18:42:57.524010   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.524023   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:57.524033   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:57.524092   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:57.558106   80857 cri.go:89] found id: ""
	I0717 18:42:57.558132   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.558140   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:57.558145   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:57.558201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:57.591009   80857 cri.go:89] found id: ""
	I0717 18:42:57.591035   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.591045   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:57.591051   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:57.591110   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:57.624564   80857 cri.go:89] found id: ""
	I0717 18:42:57.624592   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.624601   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:57.624612   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:57.624627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:57.699833   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:57.699868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:57.737029   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:57.737066   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:57.790562   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:57.790605   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:57.804935   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:57.804984   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:57.873081   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:59.170203   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.170348   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.325020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.825020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.596885   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.597698   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:00.374166   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:00.388370   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:00.388443   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:00.421228   80857 cri.go:89] found id: ""
	I0717 18:43:00.421257   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.421268   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:00.421276   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:00.421325   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:00.451819   80857 cri.go:89] found id: ""
	I0717 18:43:00.451846   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.451856   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:00.451862   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:00.451917   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:00.482960   80857 cri.go:89] found id: ""
	I0717 18:43:00.482993   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.483004   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:00.483015   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:00.483074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:00.515860   80857 cri.go:89] found id: ""
	I0717 18:43:00.515882   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.515892   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:00.515899   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:00.515954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:00.548177   80857 cri.go:89] found id: ""
	I0717 18:43:00.548202   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.548212   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:00.548217   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:00.548275   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:00.580759   80857 cri.go:89] found id: ""
	I0717 18:43:00.580782   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.580790   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:00.580795   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:00.580847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:00.618661   80857 cri.go:89] found id: ""
	I0717 18:43:00.618683   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.618691   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:00.618699   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:00.618742   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:00.650503   80857 cri.go:89] found id: ""
	I0717 18:43:00.650528   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.650535   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:00.650544   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:00.650555   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:00.699668   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:00.699697   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:00.714086   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:00.714114   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:00.777051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:00.777087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:00.777105   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:00.859238   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:00.859274   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.399050   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:03.412565   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:03.412626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:03.445993   80857 cri.go:89] found id: ""
	I0717 18:43:03.446026   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.446038   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:03.446045   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:03.446101   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:03.481251   80857 cri.go:89] found id: ""
	I0717 18:43:03.481285   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.481297   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:03.481305   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:03.481371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:03.514406   80857 cri.go:89] found id: ""
	I0717 18:43:03.514433   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.514441   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:03.514447   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:03.514497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:03.546217   80857 cri.go:89] found id: ""
	I0717 18:43:03.546248   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.546258   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:03.546266   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:03.546327   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:03.577287   80857 cri.go:89] found id: ""
	I0717 18:43:03.577318   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.577333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:03.577340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:03.577394   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:03.610080   80857 cri.go:89] found id: ""
	I0717 18:43:03.610101   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.610109   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:03.610114   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:03.610159   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:03.643753   80857 cri.go:89] found id: ""
	I0717 18:43:03.643777   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.643787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:03.643792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:03.643849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:03.676290   80857 cri.go:89] found id: ""
	I0717 18:43:03.676338   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.676345   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:03.676353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:03.676364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:03.727818   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:03.727850   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:03.740752   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:03.740784   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:03.810465   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:03.810485   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:03.810499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:03.889326   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:03.889359   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.170473   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:05.170754   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:07.172145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.323855   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.325019   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.096213   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.096443   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.426949   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:06.440007   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:06.440079   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:06.471689   80857 cri.go:89] found id: ""
	I0717 18:43:06.471715   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.471724   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:06.471729   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:06.471775   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:06.503818   80857 cri.go:89] found id: ""
	I0717 18:43:06.503840   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.503847   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:06.503853   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:06.503900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:06.534733   80857 cri.go:89] found id: ""
	I0717 18:43:06.534755   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.534763   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:06.534768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:06.534818   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:06.565388   80857 cri.go:89] found id: ""
	I0717 18:43:06.565414   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.565421   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:06.565431   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:06.565480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:06.597739   80857 cri.go:89] found id: ""
	I0717 18:43:06.597764   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.597775   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:06.597782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:06.597847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:06.629823   80857 cri.go:89] found id: ""
	I0717 18:43:06.629845   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.629853   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:06.629859   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:06.629921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:06.663753   80857 cri.go:89] found id: ""
	I0717 18:43:06.663779   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.663787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:06.663792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:06.663838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:06.700868   80857 cri.go:89] found id: ""
	I0717 18:43:06.700896   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.700906   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:06.700917   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:06.700932   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:06.753064   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:06.753097   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:06.765845   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:06.765868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:06.834691   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:06.834715   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:06.834729   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:06.908650   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:06.908682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.450804   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:09.463369   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:09.463452   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:09.506992   80857 cri.go:89] found id: ""
	I0717 18:43:09.507020   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.507028   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:09.507035   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:09.507093   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:09.543083   80857 cri.go:89] found id: ""
	I0717 18:43:09.543108   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.543116   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:09.543121   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:09.543174   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:09.576194   80857 cri.go:89] found id: ""
	I0717 18:43:09.576219   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.576226   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:09.576231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:09.576289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:09.610148   80857 cri.go:89] found id: ""
	I0717 18:43:09.610171   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.610178   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:09.610184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:09.610258   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:09.642217   80857 cri.go:89] found id: ""
	I0717 18:43:09.642246   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.642255   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:09.642263   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:09.642342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:09.678041   80857 cri.go:89] found id: ""
	I0717 18:43:09.678064   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.678073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:09.678079   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:09.678141   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:09.711162   80857 cri.go:89] found id: ""
	I0717 18:43:09.711193   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.711204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:09.711212   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:09.711272   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:09.746135   80857 cri.go:89] found id: ""
	I0717 18:43:09.746164   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.746175   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:09.746186   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:09.746197   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:09.799268   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:09.799303   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:09.811910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:09.811935   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:09.876939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:09.876982   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:09.876998   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:09.951468   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:09.951502   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.671086   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.170273   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.823628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.824485   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.597216   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:13.096347   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.488926   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:12.501054   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:12.501112   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:12.532536   80857 cri.go:89] found id: ""
	I0717 18:43:12.532569   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.532577   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:12.532582   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:12.532629   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:12.565102   80857 cri.go:89] found id: ""
	I0717 18:43:12.565130   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.565141   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:12.565148   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:12.565208   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:12.600262   80857 cri.go:89] found id: ""
	I0717 18:43:12.600299   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.600309   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:12.600316   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:12.600366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:12.633950   80857 cri.go:89] found id: ""
	I0717 18:43:12.633980   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.633991   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:12.633998   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:12.634054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:12.673297   80857 cri.go:89] found id: ""
	I0717 18:43:12.673325   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.673338   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:12.673345   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:12.673406   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:12.707112   80857 cri.go:89] found id: ""
	I0717 18:43:12.707136   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.707144   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:12.707150   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:12.707206   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:12.746323   80857 cri.go:89] found id: ""
	I0717 18:43:12.746348   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.746358   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:12.746372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:12.746433   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:12.779470   80857 cri.go:89] found id: ""
	I0717 18:43:12.779496   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.779507   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:12.779518   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:12.779534   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:12.830156   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:12.830178   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:12.843707   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:12.843734   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:12.911849   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:12.911875   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:12.911891   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:12.986090   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:12.986122   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:14.170350   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:16.670284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:14.824727   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.324146   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.096736   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.596689   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.523428   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:15.536012   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:15.536070   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:15.569179   80857 cri.go:89] found id: ""
	I0717 18:43:15.569208   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.569218   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:15.569225   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:15.569273   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:15.606727   80857 cri.go:89] found id: ""
	I0717 18:43:15.606749   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.606757   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:15.606763   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:15.606805   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:15.638842   80857 cri.go:89] found id: ""
	I0717 18:43:15.638873   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.638883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:15.638889   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:15.638939   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:15.671418   80857 cri.go:89] found id: ""
	I0717 18:43:15.671444   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.671453   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:15.671459   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:15.671517   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:15.704892   80857 cri.go:89] found id: ""
	I0717 18:43:15.704928   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.704937   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:15.704956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:15.705013   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:15.738478   80857 cri.go:89] found id: ""
	I0717 18:43:15.738502   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.738509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:15.738515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:15.738584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:15.771188   80857 cri.go:89] found id: ""
	I0717 18:43:15.771225   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.771237   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:15.771245   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:15.771303   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:15.807737   80857 cri.go:89] found id: ""
	I0717 18:43:15.807763   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.807770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:15.807779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:15.807790   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:15.861202   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:15.861234   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:15.874170   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:15.874200   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:15.938049   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:15.938073   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:15.938086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:16.025420   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:16.025456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:18.563320   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:18.575574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:18.575634   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:18.608673   80857 cri.go:89] found id: ""
	I0717 18:43:18.608700   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.608710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:18.608718   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:18.608782   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:18.641589   80857 cri.go:89] found id: ""
	I0717 18:43:18.641611   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.641618   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:18.641624   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:18.641679   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:18.672232   80857 cri.go:89] found id: ""
	I0717 18:43:18.672258   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.672268   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:18.672274   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:18.672331   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:18.706088   80857 cri.go:89] found id: ""
	I0717 18:43:18.706111   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.706118   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:18.706134   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:18.706179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:18.742475   80857 cri.go:89] found id: ""
	I0717 18:43:18.742503   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.742512   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:18.742518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:18.742575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:18.774141   80857 cri.go:89] found id: ""
	I0717 18:43:18.774169   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.774178   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:18.774183   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:18.774234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:18.806648   80857 cri.go:89] found id: ""
	I0717 18:43:18.806672   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.806679   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:18.806685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:18.806731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:18.838022   80857 cri.go:89] found id: ""
	I0717 18:43:18.838047   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.838054   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:18.838062   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:18.838076   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:18.903467   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:18.903487   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:18.903498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:18.980385   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:18.980432   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:19.020884   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:19.020914   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:19.073530   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:19.073574   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:19.169841   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.172793   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:19.824764   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.826081   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:20.095275   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:22.097120   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.587870   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:21.602130   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:21.602185   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:21.635373   80857 cri.go:89] found id: ""
	I0717 18:43:21.635401   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.635411   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:21.635418   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:21.635480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:21.667175   80857 cri.go:89] found id: ""
	I0717 18:43:21.667200   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.667209   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:21.667216   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:21.667267   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:21.705876   80857 cri.go:89] found id: ""
	I0717 18:43:21.705907   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.705918   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:21.705926   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:21.705988   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:21.753302   80857 cri.go:89] found id: ""
	I0717 18:43:21.753323   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.753330   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:21.753337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:21.753388   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:21.785363   80857 cri.go:89] found id: ""
	I0717 18:43:21.785390   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.785396   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:21.785402   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:21.785448   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:21.817517   80857 cri.go:89] found id: ""
	I0717 18:43:21.817545   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.817553   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:21.817560   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:21.817615   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:21.849451   80857 cri.go:89] found id: ""
	I0717 18:43:21.849478   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.849489   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:21.849497   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:21.849553   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:21.880032   80857 cri.go:89] found id: ""
	I0717 18:43:21.880055   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.880063   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:21.880073   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:21.880086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:21.928498   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:21.928530   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:21.941532   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:21.941565   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:22.014044   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:22.014066   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:22.014081   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:22.090789   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:22.090817   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:24.628401   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:24.643571   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:24.643642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:24.679262   80857 cri.go:89] found id: ""
	I0717 18:43:24.679288   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.679297   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:24.679303   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:24.679360   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:24.713043   80857 cri.go:89] found id: ""
	I0717 18:43:24.713073   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.713085   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:24.713092   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:24.713145   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:24.751459   80857 cri.go:89] found id: ""
	I0717 18:43:24.751496   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.751508   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:24.751518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:24.751584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:24.790793   80857 cri.go:89] found id: ""
	I0717 18:43:24.790820   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.790831   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:24.790838   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:24.790895   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:24.822909   80857 cri.go:89] found id: ""
	I0717 18:43:24.822936   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.822945   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:24.822953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:24.823016   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:24.855369   80857 cri.go:89] found id: ""
	I0717 18:43:24.855418   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.855455   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:24.855468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:24.855557   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:24.891080   80857 cri.go:89] found id: ""
	I0717 18:43:24.891110   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.891127   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:24.891133   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:24.891187   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:24.923679   80857 cri.go:89] found id: ""
	I0717 18:43:24.923812   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.923833   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:24.923847   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:24.923863   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:24.975469   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:24.975499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:24.988671   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:24.988702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:43:23.670616   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.171013   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.323858   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.324395   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:28.325125   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.596495   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.597134   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:29.096334   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	W0717 18:43:25.055191   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:25.055210   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:25.055223   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:25.138867   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:25.138900   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:27.678822   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:27.691422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:27.691483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:27.723979   80857 cri.go:89] found id: ""
	I0717 18:43:27.724008   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.724016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:27.724022   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:27.724067   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:27.756389   80857 cri.go:89] found id: ""
	I0717 18:43:27.756415   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.756423   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:27.756429   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:27.756476   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:27.787617   80857 cri.go:89] found id: ""
	I0717 18:43:27.787644   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.787652   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:27.787658   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:27.787705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:27.821688   80857 cri.go:89] found id: ""
	I0717 18:43:27.821716   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.821725   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:27.821732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:27.821787   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:27.855353   80857 cri.go:89] found id: ""
	I0717 18:43:27.855378   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.855386   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:27.855392   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:27.855439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:27.887885   80857 cri.go:89] found id: ""
	I0717 18:43:27.887909   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.887917   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:27.887923   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:27.887984   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:27.918797   80857 cri.go:89] found id: ""
	I0717 18:43:27.918820   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.918828   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:27.918833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:27.918884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:27.951255   80857 cri.go:89] found id: ""
	I0717 18:43:27.951283   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.951295   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:27.951306   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:27.951319   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:28.025476   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:28.025506   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:28.063994   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:28.064020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:28.117762   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:28.117805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:28.135688   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:28.135725   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:28.238770   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:28.172438   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.670703   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:32.674896   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.824443   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.324216   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:31.595533   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.597968   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.739930   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:30.754147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:30.754231   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:30.794454   80857 cri.go:89] found id: ""
	I0717 18:43:30.794479   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.794486   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:30.794491   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:30.794548   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:30.831643   80857 cri.go:89] found id: ""
	I0717 18:43:30.831666   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.831673   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:30.831678   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:30.831731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:30.863293   80857 cri.go:89] found id: ""
	I0717 18:43:30.863315   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.863323   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:30.863337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:30.863395   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:30.897830   80857 cri.go:89] found id: ""
	I0717 18:43:30.897859   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.897870   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:30.897877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:30.897929   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:30.933179   80857 cri.go:89] found id: ""
	I0717 18:43:30.933209   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.933220   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:30.933227   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:30.933289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:30.964730   80857 cri.go:89] found id: ""
	I0717 18:43:30.964759   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.964773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:30.964781   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:30.964825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:30.996330   80857 cri.go:89] found id: ""
	I0717 18:43:30.996353   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.996361   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:30.996367   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:30.996419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:31.028193   80857 cri.go:89] found id: ""
	I0717 18:43:31.028220   80857 logs.go:276] 0 containers: []
	W0717 18:43:31.028228   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:31.028237   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:31.028251   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:31.040465   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:31.040490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:31.108127   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:31.108150   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:31.108164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:31.187763   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:31.187797   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:31.224238   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:31.224266   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:33.776145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:33.790045   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:33.790108   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:33.823471   80857 cri.go:89] found id: ""
	I0717 18:43:33.823495   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.823505   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:33.823512   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:33.823568   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:33.860205   80857 cri.go:89] found id: ""
	I0717 18:43:33.860233   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.860243   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:33.860250   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:33.860298   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:33.895469   80857 cri.go:89] found id: ""
	I0717 18:43:33.895499   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.895509   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:33.895516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:33.895578   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:33.938483   80857 cri.go:89] found id: ""
	I0717 18:43:33.938517   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.938527   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:33.938534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:33.938596   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:33.973265   80857 cri.go:89] found id: ""
	I0717 18:43:33.973293   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.973303   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:33.973309   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:33.973382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:34.012669   80857 cri.go:89] found id: ""
	I0717 18:43:34.012696   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.012704   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:34.012710   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:34.012760   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:34.045522   80857 cri.go:89] found id: ""
	I0717 18:43:34.045547   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.045557   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:34.045564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:34.045636   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:34.082927   80857 cri.go:89] found id: ""
	I0717 18:43:34.082957   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.082968   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:34.082979   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:34.082993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:34.134133   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:34.134168   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:34.146814   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:34.146837   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:34.217050   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:34.217079   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:34.217094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:34.298572   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:34.298610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:35.169868   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.170083   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:35.324578   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.825006   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:36.096437   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:38.096991   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:36.838187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:36.850888   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:36.850948   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:36.883132   80857 cri.go:89] found id: ""
	I0717 18:43:36.883153   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.883160   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:36.883166   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:36.883209   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:36.918310   80857 cri.go:89] found id: ""
	I0717 18:43:36.918339   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.918348   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:36.918353   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:36.918411   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:36.949794   80857 cri.go:89] found id: ""
	I0717 18:43:36.949818   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.949825   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:36.949831   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:36.949889   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:36.980913   80857 cri.go:89] found id: ""
	I0717 18:43:36.980951   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.980962   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:36.980969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:36.981029   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:37.014295   80857 cri.go:89] found id: ""
	I0717 18:43:37.014322   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.014330   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:37.014336   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:37.014397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:37.048555   80857 cri.go:89] found id: ""
	I0717 18:43:37.048581   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.048589   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:37.048595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:37.048643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:37.080533   80857 cri.go:89] found id: ""
	I0717 18:43:37.080561   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.080571   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:37.080577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:37.080640   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:37.112919   80857 cri.go:89] found id: ""
	I0717 18:43:37.112952   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.112963   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:37.112973   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:37.112987   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:37.165012   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:37.165044   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:37.177860   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:37.177881   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:37.244776   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:37.244806   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:37.244824   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:37.322949   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:37.322976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:39.861056   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:39.884509   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:39.884592   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:39.931317   80857 cri.go:89] found id: ""
	I0717 18:43:39.931341   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.931348   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:39.931354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:39.931410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:39.971571   80857 cri.go:89] found id: ""
	I0717 18:43:39.971615   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.971626   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:39.971634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:39.971692   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:40.003851   80857 cri.go:89] found id: ""
	I0717 18:43:40.003875   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.003883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:40.003891   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:40.003942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:40.040403   80857 cri.go:89] found id: ""
	I0717 18:43:40.040430   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.040440   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:40.040445   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:40.040498   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:39.669960   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.170056   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.325792   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.824332   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.596935   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.597153   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.071893   80857 cri.go:89] found id: ""
	I0717 18:43:40.071919   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.071927   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:40.071932   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:40.071979   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:40.111020   80857 cri.go:89] found id: ""
	I0717 18:43:40.111042   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.111052   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:40.111059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:40.111117   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:40.142872   80857 cri.go:89] found id: ""
	I0717 18:43:40.142899   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.142910   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:40.142917   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:40.142975   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:40.179919   80857 cri.go:89] found id: ""
	I0717 18:43:40.179944   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.179953   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:40.179963   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:40.179980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:40.233033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:40.233075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:40.246272   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:40.246299   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:40.311988   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:40.312014   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:40.312033   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:40.395622   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:40.395658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:42.935843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:42.949893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:42.949957   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:42.982429   80857 cri.go:89] found id: ""
	I0717 18:43:42.982451   80857 logs.go:276] 0 containers: []
	W0717 18:43:42.982459   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:42.982464   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:42.982512   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:43.018637   80857 cri.go:89] found id: ""
	I0717 18:43:43.018659   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.018666   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:43.018672   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:43.018719   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:43.054274   80857 cri.go:89] found id: ""
	I0717 18:43:43.054301   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.054310   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:43.054317   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:43.054368   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:43.093382   80857 cri.go:89] found id: ""
	I0717 18:43:43.093408   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.093418   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:43.093425   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:43.093484   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:43.125830   80857 cri.go:89] found id: ""
	I0717 18:43:43.125862   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.125871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:43.125878   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:43.125936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:43.157110   80857 cri.go:89] found id: ""
	I0717 18:43:43.157138   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.157147   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:43.157154   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:43.157215   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:43.188320   80857 cri.go:89] found id: ""
	I0717 18:43:43.188342   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.188349   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:43.188354   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:43.188400   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:43.220650   80857 cri.go:89] found id: ""
	I0717 18:43:43.220679   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.220686   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:43.220695   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:43.220707   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:43.259320   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:43.259358   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:43.308308   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:43.308346   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:43.321865   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:43.321894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:43.396110   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:43.396135   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:43.396147   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:44.670206   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.169748   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.323427   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.324066   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.096564   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.105605   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.976091   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:45.988956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:45.989015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:46.022277   80857 cri.go:89] found id: ""
	I0717 18:43:46.022307   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.022318   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:46.022325   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:46.022398   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:46.057607   80857 cri.go:89] found id: ""
	I0717 18:43:46.057636   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.057646   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:46.057653   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:46.057712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:46.089275   80857 cri.go:89] found id: ""
	I0717 18:43:46.089304   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.089313   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:46.089321   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:46.089378   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:46.123686   80857 cri.go:89] found id: ""
	I0717 18:43:46.123717   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.123726   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:46.123731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:46.123784   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:46.166600   80857 cri.go:89] found id: ""
	I0717 18:43:46.166628   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.166638   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:46.166645   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:46.166704   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:46.202518   80857 cri.go:89] found id: ""
	I0717 18:43:46.202543   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.202562   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:46.202568   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:46.202612   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:46.234573   80857 cri.go:89] found id: ""
	I0717 18:43:46.234608   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.234620   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:46.234627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:46.234687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:46.265305   80857 cri.go:89] found id: ""
	I0717 18:43:46.265333   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.265343   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:46.265355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:46.265369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:46.342963   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:46.342993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:46.377170   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:46.377208   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:46.429641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:46.429673   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:46.442168   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:46.442195   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:46.516656   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.016877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:49.030308   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:49.030375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:49.062400   80857 cri.go:89] found id: ""
	I0717 18:43:49.062423   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.062430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:49.062435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:49.062486   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:49.097110   80857 cri.go:89] found id: ""
	I0717 18:43:49.097131   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.097137   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:49.097142   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:49.097190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:49.128535   80857 cri.go:89] found id: ""
	I0717 18:43:49.128558   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.128571   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:49.128577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:49.128626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:49.162505   80857 cri.go:89] found id: ""
	I0717 18:43:49.162530   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.162538   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:49.162544   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:49.162594   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:49.194912   80857 cri.go:89] found id: ""
	I0717 18:43:49.194939   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.194950   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:49.194957   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:49.195025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:49.227055   80857 cri.go:89] found id: ""
	I0717 18:43:49.227083   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.227092   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:49.227098   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:49.227147   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:49.259568   80857 cri.go:89] found id: ""
	I0717 18:43:49.259596   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.259607   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:49.259618   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:49.259673   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:49.291700   80857 cri.go:89] found id: ""
	I0717 18:43:49.291727   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.291735   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:49.291744   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:49.291755   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:49.344600   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:49.344636   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:49.357680   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:49.357705   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:49.427160   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.427180   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:49.427192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:49.504151   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:49.504182   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:49.170632   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.170953   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.324205   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.823181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:53.824989   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.596298   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.596383   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:54.097260   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:52.041591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:52.054775   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:52.054841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:52.085858   80857 cri.go:89] found id: ""
	I0717 18:43:52.085892   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.085904   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:52.085911   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:52.085961   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:52.124100   80857 cri.go:89] found id: ""
	I0717 18:43:52.124122   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.124130   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:52.124135   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:52.124195   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:52.155056   80857 cri.go:89] found id: ""
	I0717 18:43:52.155079   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.155087   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:52.155093   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:52.155154   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:52.189318   80857 cri.go:89] found id: ""
	I0717 18:43:52.189349   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.189359   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:52.189366   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:52.189430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:52.222960   80857 cri.go:89] found id: ""
	I0717 18:43:52.222988   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.222999   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:52.223006   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:52.223071   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:52.255807   80857 cri.go:89] found id: ""
	I0717 18:43:52.255834   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.255841   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:52.255847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:52.255904   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:52.286596   80857 cri.go:89] found id: ""
	I0717 18:43:52.286628   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.286641   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:52.286648   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:52.286703   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:52.319607   80857 cri.go:89] found id: ""
	I0717 18:43:52.319632   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.319641   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:52.319652   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:52.319666   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:52.371270   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:52.371301   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:52.384771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:52.384803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:52.456408   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:52.456432   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:52.456444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:52.533724   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:52.533759   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:53.171080   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.669642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.324311   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.823693   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.595916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.597526   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.072554   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:55.087005   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:55.087086   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:55.123300   80857 cri.go:89] found id: ""
	I0717 18:43:55.123325   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.123331   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:55.123336   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:55.123390   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:55.158476   80857 cri.go:89] found id: ""
	I0717 18:43:55.158502   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.158509   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:55.158515   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:55.158572   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:55.198489   80857 cri.go:89] found id: ""
	I0717 18:43:55.198511   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.198518   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:55.198524   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:55.198567   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:55.230901   80857 cri.go:89] found id: ""
	I0717 18:43:55.230933   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.230943   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:55.230951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:55.231028   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:55.262303   80857 cri.go:89] found id: ""
	I0717 18:43:55.262326   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.262333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:55.262340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:55.262393   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:55.293889   80857 cri.go:89] found id: ""
	I0717 18:43:55.293916   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.293925   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:55.293930   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:55.293983   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:55.325695   80857 cri.go:89] found id: ""
	I0717 18:43:55.325720   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.325727   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:55.325737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:55.325797   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:55.360021   80857 cri.go:89] found id: ""
	I0717 18:43:55.360044   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.360052   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:55.360059   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:55.360075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:55.372088   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:55.372111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:55.442073   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:55.442101   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:55.442116   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:55.521733   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:55.521763   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:55.558914   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:55.558947   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.114001   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:58.126283   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:58.126353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:58.162769   80857 cri.go:89] found id: ""
	I0717 18:43:58.162800   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.162810   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:58.162815   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:58.162862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:58.197359   80857 cri.go:89] found id: ""
	I0717 18:43:58.197386   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.197397   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:58.197404   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:58.197465   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:58.229662   80857 cri.go:89] found id: ""
	I0717 18:43:58.229691   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.229700   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:58.229707   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:58.229766   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:58.261810   80857 cri.go:89] found id: ""
	I0717 18:43:58.261832   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.261838   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:58.261844   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:58.261900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:58.293243   80857 cri.go:89] found id: ""
	I0717 18:43:58.293271   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.293282   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:58.293290   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:58.293353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:58.325689   80857 cri.go:89] found id: ""
	I0717 18:43:58.325714   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.325724   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:58.325731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:58.325785   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:58.357381   80857 cri.go:89] found id: ""
	I0717 18:43:58.357406   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.357416   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:58.357422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:58.357483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:58.389859   80857 cri.go:89] found id: ""
	I0717 18:43:58.389888   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.389900   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:58.389910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:58.389926   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:58.458034   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:58.458058   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:58.458072   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:58.536134   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:58.536164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:58.573808   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:58.573834   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.624956   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:58.624985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:58.170810   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.670184   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.671370   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.824682   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.824874   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:01.096294   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:03.096348   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:01.138486   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:01.151547   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:01.151610   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:01.186397   80857 cri.go:89] found id: ""
	I0717 18:44:01.186422   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.186430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:01.186435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:01.186487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:01.220797   80857 cri.go:89] found id: ""
	I0717 18:44:01.220822   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.220830   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:01.220849   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:01.220894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:01.257640   80857 cri.go:89] found id: ""
	I0717 18:44:01.257666   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.257674   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:01.257680   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:01.257727   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:01.295393   80857 cri.go:89] found id: ""
	I0717 18:44:01.295418   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.295425   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:01.295432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:01.295493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:01.327242   80857 cri.go:89] found id: ""
	I0717 18:44:01.327261   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.327268   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:01.327273   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:01.327319   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:01.358559   80857 cri.go:89] found id: ""
	I0717 18:44:01.358586   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.358593   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:01.358599   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:01.358647   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:01.392301   80857 cri.go:89] found id: ""
	I0717 18:44:01.392332   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.392341   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:01.392346   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:01.392407   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:01.424422   80857 cri.go:89] found id: ""
	I0717 18:44:01.424449   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.424457   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:01.424465   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:01.424477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:01.473298   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:01.473332   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:01.487444   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:01.487471   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:01.552548   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:01.552572   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:01.552586   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:01.634203   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:01.634242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:04.175618   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:04.188071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:04.188150   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:04.222149   80857 cri.go:89] found id: ""
	I0717 18:44:04.222173   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.222180   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:04.222185   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:04.222242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:04.257174   80857 cri.go:89] found id: ""
	I0717 18:44:04.257211   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.257223   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:04.257232   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:04.257284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:04.291628   80857 cri.go:89] found id: ""
	I0717 18:44:04.291653   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.291666   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:04.291673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:04.291733   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:04.325935   80857 cri.go:89] found id: ""
	I0717 18:44:04.325964   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.325975   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:04.325982   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:04.326043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:04.356610   80857 cri.go:89] found id: ""
	I0717 18:44:04.356638   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.356648   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:04.356655   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:04.356712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:04.387728   80857 cri.go:89] found id: ""
	I0717 18:44:04.387764   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.387773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:04.387782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:04.387840   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:04.421452   80857 cri.go:89] found id: ""
	I0717 18:44:04.421479   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.421488   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:04.421495   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:04.421555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:04.453111   80857 cri.go:89] found id: ""
	I0717 18:44:04.453139   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.453150   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:04.453161   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:04.453175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:04.506185   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:04.506215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:04.523611   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:04.523638   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:04.591051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:04.591074   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:04.591091   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:04.666603   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:04.666647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:05.169836   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.170112   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.324886   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.325488   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.096545   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.598131   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.205208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:07.218182   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:07.218236   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:07.254521   80857 cri.go:89] found id: ""
	I0717 18:44:07.254554   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.254565   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:07.254571   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:07.254638   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:07.293622   80857 cri.go:89] found id: ""
	I0717 18:44:07.293650   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.293658   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:07.293663   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:07.293711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:07.331056   80857 cri.go:89] found id: ""
	I0717 18:44:07.331083   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.331091   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:07.331097   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:07.331157   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:07.368445   80857 cri.go:89] found id: ""
	I0717 18:44:07.368476   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.368484   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:07.368491   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:07.368541   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:07.405507   80857 cri.go:89] found id: ""
	I0717 18:44:07.405539   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.405550   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:07.405557   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:07.405617   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:07.444752   80857 cri.go:89] found id: ""
	I0717 18:44:07.444782   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.444792   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:07.444801   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:07.444859   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:07.486976   80857 cri.go:89] found id: ""
	I0717 18:44:07.487006   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.487016   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:07.487024   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:07.487073   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:07.522561   80857 cri.go:89] found id: ""
	I0717 18:44:07.522590   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.522599   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:07.522607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:07.522618   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:07.576350   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:07.576382   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:07.591491   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:07.591517   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:07.659860   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:07.659886   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:07.659902   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:07.743445   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:07.743478   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:09.170601   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.170851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:09.824120   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.826838   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.097009   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:12.596778   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.284468   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:10.296549   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:10.296608   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:10.331209   80857 cri.go:89] found id: ""
	I0717 18:44:10.331236   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.331246   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:10.331252   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:10.331297   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:10.363911   80857 cri.go:89] found id: ""
	I0717 18:44:10.363941   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.363949   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:10.363954   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:10.364001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:10.395935   80857 cri.go:89] found id: ""
	I0717 18:44:10.395960   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.395970   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:10.395977   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:10.396021   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:10.428307   80857 cri.go:89] found id: ""
	I0717 18:44:10.428337   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.428344   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:10.428351   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:10.428397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:10.459615   80857 cri.go:89] found id: ""
	I0717 18:44:10.459643   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.459654   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:10.459661   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:10.459715   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:10.491593   80857 cri.go:89] found id: ""
	I0717 18:44:10.491617   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.491628   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:10.491636   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:10.491693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:10.526822   80857 cri.go:89] found id: ""
	I0717 18:44:10.526846   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.526853   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:10.526858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:10.526918   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:10.561037   80857 cri.go:89] found id: ""
	I0717 18:44:10.561066   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.561077   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:10.561087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:10.561101   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:10.643333   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:10.643364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:10.684673   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:10.684704   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:10.736191   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:10.736220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:10.748762   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:10.748793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:10.812121   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.313033   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:13.325692   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:13.325756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:13.358306   80857 cri.go:89] found id: ""
	I0717 18:44:13.358336   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.358345   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:13.358352   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:13.358410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:13.393233   80857 cri.go:89] found id: ""
	I0717 18:44:13.393264   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.393274   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:13.393282   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:13.393340   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:13.424256   80857 cri.go:89] found id: ""
	I0717 18:44:13.424287   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.424298   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:13.424305   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:13.424358   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:13.454988   80857 cri.go:89] found id: ""
	I0717 18:44:13.455010   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.455018   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:13.455023   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:13.455069   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:13.491019   80857 cri.go:89] found id: ""
	I0717 18:44:13.491046   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.491054   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:13.491060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:13.491107   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:13.523045   80857 cri.go:89] found id: ""
	I0717 18:44:13.523070   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.523079   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:13.523085   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:13.523131   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:13.555442   80857 cri.go:89] found id: ""
	I0717 18:44:13.555470   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.555483   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:13.555489   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:13.555549   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:13.588891   80857 cri.go:89] found id: ""
	I0717 18:44:13.588921   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.588931   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:13.588958   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:13.588973   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:13.663635   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.663659   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:13.663674   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:13.749098   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:13.749135   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:13.785489   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:13.785524   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:13.837098   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:13.837128   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:13.671215   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.671282   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.671466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:14.324573   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.826063   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.095967   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.096403   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.096478   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.350571   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:16.364398   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:16.364470   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:16.400677   80857 cri.go:89] found id: ""
	I0717 18:44:16.400708   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.400719   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:16.400726   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:16.400781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:16.431715   80857 cri.go:89] found id: ""
	I0717 18:44:16.431743   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.431754   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:16.431760   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:16.431836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:16.465115   80857 cri.go:89] found id: ""
	I0717 18:44:16.465148   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.465160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:16.465167   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:16.465230   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:16.497906   80857 cri.go:89] found id: ""
	I0717 18:44:16.497933   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.497944   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:16.497952   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:16.498008   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:16.534066   80857 cri.go:89] found id: ""
	I0717 18:44:16.534097   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.534108   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:16.534116   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:16.534173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:16.566679   80857 cri.go:89] found id: ""
	I0717 18:44:16.566706   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.566717   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:16.566724   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:16.566781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:16.598397   80857 cri.go:89] found id: ""
	I0717 18:44:16.598416   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.598422   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:16.598427   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:16.598480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:16.629943   80857 cri.go:89] found id: ""
	I0717 18:44:16.629975   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.629998   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:16.630017   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:16.630032   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:16.706452   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:16.706489   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:16.744971   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:16.745003   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:16.796450   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:16.796477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:16.809192   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:16.809217   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:16.875699   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.376821   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:19.389921   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:19.389980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:19.423837   80857 cri.go:89] found id: ""
	I0717 18:44:19.423862   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.423870   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:19.423877   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:19.423934   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:19.468267   80857 cri.go:89] found id: ""
	I0717 18:44:19.468293   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.468305   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:19.468311   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:19.468371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:19.503286   80857 cri.go:89] found id: ""
	I0717 18:44:19.503315   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.503326   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:19.503333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:19.503391   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:19.535505   80857 cri.go:89] found id: ""
	I0717 18:44:19.535531   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.535542   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:19.535548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:19.535607   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:19.568678   80857 cri.go:89] found id: ""
	I0717 18:44:19.568704   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.568711   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:19.568717   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:19.568762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:19.604027   80857 cri.go:89] found id: ""
	I0717 18:44:19.604053   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.604064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:19.604071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:19.604127   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:19.637357   80857 cri.go:89] found id: ""
	I0717 18:44:19.637387   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.637397   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:19.637403   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:19.637450   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:19.669094   80857 cri.go:89] found id: ""
	I0717 18:44:19.669126   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.669136   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:19.669145   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:19.669160   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:19.720218   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:19.720248   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:19.733320   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:19.733343   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:19.796229   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.796252   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:19.796267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:19.871157   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:19.871186   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
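For readers following the repeated cri.go/logs.go cycles above: minikube probes each control-plane component by shelling out to `crictl ps -a --quiet --name=<component>` and warns when nothing matches. The following is a minimal local sketch of that pattern only (hypothetical helper, not minikube's actual cri.go; it assumes crictl and sudo are available on the host rather than going through ssh_runner):

// Hypothetical sketch: list container IDs for a named component the way the
// log lines above do, by running `crictl ps -a --quiet --name=<component>`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers returns the IDs of all containers (any state) whose name
// matches the given component, e.g. "kube-apiserver" or "etcd".
func listCRIContainers(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listCRIContainers(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Mirrors the `No container was found matching ...` warnings above.
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%q: %v\n", c, ids)
	}
}
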
	I0717 18:44:20.170824   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.670239   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.324037   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.324408   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.824030   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.098734   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.595859   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.409012   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:22.421477   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:22.421546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:22.457314   80857 cri.go:89] found id: ""
	I0717 18:44:22.457337   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.457346   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:22.457354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:22.457410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:22.490998   80857 cri.go:89] found id: ""
	I0717 18:44:22.491022   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.491030   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:22.491037   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:22.491090   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:22.523904   80857 cri.go:89] found id: ""
	I0717 18:44:22.523934   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.523945   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:22.523953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:22.524012   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:22.555917   80857 cri.go:89] found id: ""
	I0717 18:44:22.555947   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.555956   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:22.555962   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:22.556026   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:22.588510   80857 cri.go:89] found id: ""
	I0717 18:44:22.588552   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.588565   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:22.588574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:22.588652   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:22.621854   80857 cri.go:89] found id: ""
	I0717 18:44:22.621883   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.621893   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:22.621901   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:22.621956   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:22.653897   80857 cri.go:89] found id: ""
	I0717 18:44:22.653921   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.653931   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:22.653938   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:22.654001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:22.685731   80857 cri.go:89] found id: ""
	I0717 18:44:22.685760   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.685770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:22.685779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:22.685792   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:22.735514   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:22.735545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:22.748148   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:22.748169   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:22.809637   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:22.809666   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:22.809682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:22.886014   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:22.886050   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:24.670825   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:27.169930   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.824694   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.324620   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.597423   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.095788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.431906   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:25.444866   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:25.444965   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:25.477211   80857 cri.go:89] found id: ""
	I0717 18:44:25.477245   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.477257   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:25.477264   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:25.477366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:25.512077   80857 cri.go:89] found id: ""
	I0717 18:44:25.512108   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.512120   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:25.512127   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:25.512177   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:25.543953   80857 cri.go:89] found id: ""
	I0717 18:44:25.543974   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.543981   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:25.543987   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:25.544032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:25.574955   80857 cri.go:89] found id: ""
	I0717 18:44:25.574980   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.574990   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:25.574997   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:25.575054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:25.607078   80857 cri.go:89] found id: ""
	I0717 18:44:25.607106   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.607117   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:25.607125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:25.607188   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:25.643129   80857 cri.go:89] found id: ""
	I0717 18:44:25.643152   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.643162   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:25.643169   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:25.643225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:25.678220   80857 cri.go:89] found id: ""
	I0717 18:44:25.678241   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.678249   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:25.678254   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:25.678309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:25.715405   80857 cri.go:89] found id: ""
	I0717 18:44:25.715433   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.715446   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:25.715458   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:25.715474   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:25.772978   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:25.773008   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:25.786559   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:25.786587   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:25.853369   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:25.853386   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:25.853398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:25.954346   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:25.954398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:28.498591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:28.511701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:28.511762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:28.543527   80857 cri.go:89] found id: ""
	I0717 18:44:28.543551   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.543559   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:28.543565   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:28.543624   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:28.574737   80857 cri.go:89] found id: ""
	I0717 18:44:28.574762   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.574769   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:28.574776   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:28.574835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:28.608129   80857 cri.go:89] found id: ""
	I0717 18:44:28.608166   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.608174   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:28.608179   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:28.608234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:28.644324   80857 cri.go:89] found id: ""
	I0717 18:44:28.644348   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.644357   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:28.644371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:28.644426   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:28.675830   80857 cri.go:89] found id: ""
	I0717 18:44:28.675859   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.675870   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:28.675877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:28.675937   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:28.705713   80857 cri.go:89] found id: ""
	I0717 18:44:28.705749   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.705760   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:28.705768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:28.705821   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:28.738648   80857 cri.go:89] found id: ""
	I0717 18:44:28.738677   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.738688   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:28.738695   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:28.738752   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:28.768877   80857 cri.go:89] found id: ""
	I0717 18:44:28.768906   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.768916   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:28.768927   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:28.768953   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:28.818951   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:28.818985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:28.832813   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:28.832843   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:28.910030   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:28.910051   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:28.910063   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:28.986706   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:28.986743   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:29.170559   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.669543   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.824906   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:33.324261   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.096916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:32.597522   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.529154   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:31.543261   80857 kubeadm.go:597] duration metric: took 4m4.346231712s to restartPrimaryControlPlane
	W0717 18:44:31.543327   80857 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:31.543350   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:44:33.670602   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.169669   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.325082   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.824371   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.096445   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.097375   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:39.098005   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.752008   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.208633612s)
	I0717 18:44:36.752076   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:44:36.765411   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:44:36.774556   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:44:36.783406   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:44:36.783427   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:44:36.783479   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:44:36.791953   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:44:36.792007   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:44:36.800929   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:44:36.808988   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:44:36.809049   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:44:36.817312   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.825586   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:44:36.825648   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.834783   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:44:36.843109   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:44:36.843166   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
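The block above is minikube's stale-config cleanup before re-running kubeadm: for each kubeconfig under /etc/kubernetes it greps for the control-plane endpoint and, when the file is missing or does not mention it, removes the file. A minimal sketch of that check, assuming local file access instead of ssh_runner (helper name and behavior are illustrative, not minikube's kubeadm.go):

// Hypothetical sketch of the stale-config cleanup shown above.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanupStaleConfig keeps a kubeconfig only if it still references the
// expected control-plane endpoint; otherwise it is removed before `kubeadm init`.
func cleanupStaleConfig(path string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up; matches "No such file or directory" above
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the expected endpoint, keep it
	}
	return os.Remove(path) // stale config, removed like the `sudo rm -f` runs above
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanupStaleConfig(f); err != nil {
			fmt.Fprintf(os.Stderr, "cleanup %s: %v\n", f, err)
		}
	}
}
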
	I0717 18:44:36.852276   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:44:37.058251   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:44:38.170695   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.671193   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.324181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.818959   80401 pod_ready.go:81] duration metric: took 4m0.000961975s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	E0717 18:44:40.818998   80401 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:44:40.819017   80401 pod_ready.go:38] duration metric: took 4m12.045669741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:44:40.819042   80401 kubeadm.go:597] duration metric: took 4m22.276381575s to restartPrimaryControlPlane
	W0717 18:44:40.819091   80401 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:40.819116   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
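The pod_ready.go lines above poll the metrics-server pod's Ready condition and give up after 4m0s ("will not retry"), at which point the control plane is reset. A minimal sketch of that kind of readiness poll, using kubectl from the shell rather than minikube's client-go based pod_ready.go (pod name, interval, and timeout here are illustrative):

// Hypothetical sketch: poll a pod's Ready condition until it is True or a
// deadline passes, similar in spirit to the pod_ready.go loop in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(namespace, name string) bool {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func waitPodReady(namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if podReady(namespace, name) {
			return nil
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, namespace)
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting %s for pod %q to be Ready", timeout, name)
}

func main() {
	if err := waitPodReady("kube-system", "metrics-server-78fcd8795b-mbtvd", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
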
	I0717 18:44:41.597013   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:44.097096   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:43.170145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:45.670626   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:46.595570   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.598459   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.169822   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:50.170686   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:52.670255   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:51.097591   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:53.597467   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:55.170853   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:57.670157   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:56.096506   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:58.107493   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.170210   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.672286   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.596747   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.590517   81068 pod_ready.go:81] duration metric: took 4m0.000120095s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:02.590549   81068 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:02.590572   81068 pod_ready.go:38] duration metric: took 4m10.536894511s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:02.590607   81068 kubeadm.go:597] duration metric: took 4m18.045314131s to restartPrimaryControlPlane
	W0717 18:45:02.590672   81068 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:02.590702   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:06.920900   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.10175503s)
	I0717 18:45:06.921009   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:06.952090   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:06.962820   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:06.979545   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:06.979577   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:06.979641   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:06.990493   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:06.990574   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:07.014934   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:07.024381   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:07.024449   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:07.033573   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.042495   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:07.042552   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.051233   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:07.059616   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:07.059674   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:07.068348   80401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:07.112042   80401 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 18:45:07.112188   80401 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:07.229262   80401 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:07.229356   80401 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:07.229491   80401 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 18:45:07.239251   80401 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:05.171753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.669753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.241949   80401 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:07.242054   80401 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:07.242150   80401 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:07.242253   80401 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:07.242355   80401 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:07.242459   80401 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:07.242536   80401 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:07.242620   80401 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:07.242721   80401 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:07.242835   80401 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:07.242937   80401 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:07.242998   80401 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:07.243068   80401 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:07.641462   80401 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:07.705768   80401 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:07.821102   80401 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:07.898702   80401 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:08.107470   80401 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:08.107945   80401 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:08.111615   80401 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:08.113464   80401 out.go:204]   - Booting up control plane ...
	I0717 18:45:08.113572   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:08.113695   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:08.113843   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:08.131411   80401 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:08.137563   80401 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:08.137622   80401 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:08.268403   80401 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:08.268519   80401 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:08.769158   80401 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.386396ms
	I0717 18:45:08.769265   80401 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:45:09.669968   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:11.670466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:13.771873   80401 kubeadm.go:310] [api-check] The API server is healthy after 5.002458706s
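The [api-check] phase above waits for the freshly started API server to report healthy. A rough sketch of such a probe loop is below; the exact endpoint and TLS handling kubeadm uses are not shown in the log, so the /healthz URL and the skipped certificate verification here are assumptions for illustration only:

// Hypothetical sketch: poll an API-server health endpoint over HTTPS until it
// returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitAPIServerHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("API server at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitAPIServerHealthy("https://control-plane.minikube.internal:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
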
	I0717 18:45:13.789581   80401 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:13.804268   80401 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:13.831438   80401 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:13.831641   80401 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-066175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:13.845165   80401 kubeadm.go:310] [bootstrap-token] Using token: fscs12.0o2n9pl0vxdw75m1
	I0717 18:45:13.846851   80401 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:13.847002   80401 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:13.854788   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:13.866828   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:13.871541   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:13.875508   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:13.880068   80401 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:14.179824   80401 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:14.669946   80401 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:15.180053   80401 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:15.180076   80401 kubeadm.go:310] 
	I0717 18:45:15.180180   80401 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:15.180201   80401 kubeadm.go:310] 
	I0717 18:45:15.180287   80401 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:15.180295   80401 kubeadm.go:310] 
	I0717 18:45:15.180348   80401 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:15.180437   80401 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:15.180517   80401 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:15.180530   80401 kubeadm.go:310] 
	I0717 18:45:15.180607   80401 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:15.180617   80401 kubeadm.go:310] 
	I0717 18:45:15.180682   80401 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:15.180692   80401 kubeadm.go:310] 
	I0717 18:45:15.180775   80401 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:15.180871   80401 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:15.180984   80401 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:15.180996   80401 kubeadm.go:310] 
	I0717 18:45:15.181107   80401 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:15.181221   80401 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:15.181234   80401 kubeadm.go:310] 
	I0717 18:45:15.181370   80401 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181523   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:15.181571   80401 kubeadm.go:310] 	--control-plane 
	I0717 18:45:15.181579   80401 kubeadm.go:310] 
	I0717 18:45:15.181679   80401 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:15.181690   80401 kubeadm.go:310] 
	I0717 18:45:15.181802   80401 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181954   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:15.182460   80401 kubeadm.go:310] W0717 18:45:07.084606    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.182848   80401 kubeadm.go:310] W0717 18:45:07.085710    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.183017   80401 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:15.183038   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:45:15.183048   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:15.185022   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:13.671267   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.671682   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.186444   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:15.197514   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
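The two lines above create /etc/cni/net.d and copy a 496-byte bridge conflist into it (the "Configuring bridge CNI" step). The sketch below writes a generic bridge-plus-portmap conflist of the kind the CNI bridge plugin accepts; it is not necessarily the exact file minikube generates, and the subnet, CNI version, and file mode are illustrative assumptions:

// Hypothetical sketch: write a minimal bridge CNI conflist, mirroring the
// `sudo mkdir -p /etc/cni/net.d` + scp steps in the log above.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
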
	I0717 18:45:15.216000   80401 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:15.216097   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.216157   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-066175 minikube.k8s.io/updated_at=2024_07_17T18_45_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=no-preload-066175 minikube.k8s.io/primary=true
	I0717 18:45:15.251049   80401 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:15.383234   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.884265   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.384075   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.883375   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.383864   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.884072   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.383283   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.883644   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.384366   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.507413   80401 kubeadm.go:1113] duration metric: took 4.291369352s to wait for elevateKubeSystemPrivileges
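The repeated `kubectl get sa default` runs above are a retry loop: after `kubeadm init`, minikube keeps asking for the "default" service account until the controller manager has created it, then records the elapsed time. A small sketch of that wait, with the kubeconfig path, interval, and timeout as illustrative values:

// Hypothetical sketch: retry `kubectl get sa default` until it succeeds or a
// deadline passes, like the loop in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitDefaultServiceAccount("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}
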
	I0717 18:45:19.507450   80401 kubeadm.go:394] duration metric: took 5m1.019320853s to StartCluster
	I0717 18:45:19.507473   80401 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.507570   80401 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:19.510004   80401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.510329   80401 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:19.510401   80401 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:19.510484   80401 addons.go:69] Setting storage-provisioner=true in profile "no-preload-066175"
	I0717 18:45:19.510515   80401 addons.go:234] Setting addon storage-provisioner=true in "no-preload-066175"
	W0717 18:45:19.510523   80401 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:19.510530   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:45:19.510531   80401 addons.go:69] Setting default-storageclass=true in profile "no-preload-066175"
	I0717 18:45:19.510553   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510551   80401 addons.go:69] Setting metrics-server=true in profile "no-preload-066175"
	I0717 18:45:19.510572   80401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-066175"
	I0717 18:45:19.510586   80401 addons.go:234] Setting addon metrics-server=true in "no-preload-066175"
	W0717 18:45:19.510596   80401 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:19.510628   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511027   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511047   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511075   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511102   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.512057   80401 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:19.513662   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:19.532038   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40719
	I0717 18:45:19.532059   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0717 18:45:19.532048   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0717 18:45:19.532557   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532562   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532701   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.533086   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533107   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533246   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533261   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533276   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533295   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533455   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533671   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533732   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533851   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.533933   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.533958   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.534280   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.534310   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.537749   80401 addons.go:234] Setting addon default-storageclass=true in "no-preload-066175"
	W0717 18:45:19.537773   80401 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:19.537804   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.538168   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.538206   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.550488   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I0717 18:45:19.551013   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.551625   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.551647   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.552005   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.552335   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.553613   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0717 18:45:19.553633   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0717 18:45:19.554184   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554243   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554271   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.554784   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554801   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.554965   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554986   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.555220   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555350   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555393   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.555995   80401 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:19.556103   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.556229   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.556825   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.557482   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:19.557499   80401 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:19.557517   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.558437   80401 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:19.560069   80401 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.560084   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:19.560100   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.560881   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.560908   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.560932   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.561265   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.561477   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.561633   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.561732   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.563601   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564025   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.564197   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.564219   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564378   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.564549   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.564686   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.579324   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37271
	I0717 18:45:19.579786   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.580331   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.580354   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.580697   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.580925   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.582700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.582910   80401 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.582923   80401 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:19.582936   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.585938   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586387   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.586414   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586605   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.586758   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.586920   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.587061   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.706369   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:19.727936   80401 node_ready.go:35] waiting up to 6m0s for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738822   80401 node_ready.go:49] node "no-preload-066175" has status "Ready":"True"
	I0717 18:45:19.738841   80401 node_ready.go:38] duration metric: took 10.872501ms for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738852   80401 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:19.744979   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:19.854180   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.873723   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:19.873746   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:19.883867   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.902041   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:19.902064   80401 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:19.926788   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:19.926867   80401 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:19.953788   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:20.571091   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571119   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571119   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571137   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571394   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.571439   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.571456   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571463   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571459   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572575   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571494   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572789   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572761   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572804   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572815   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572824   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.573027   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.573044   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589595   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.589614   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.589913   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.589940   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589918   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.789754   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.789776   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790082   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790103   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790113   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.790123   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790416   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790457   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790470   80401 addons.go:475] Verifying addon metrics-server=true in "no-preload-066175"
	I0717 18:45:20.790416   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.792175   80401 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:45:18.169876   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:20.170261   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:22.664656   80180 pod_ready.go:81] duration metric: took 4m0.000669682s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:22.664696   80180 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:22.664716   80180 pod_ready.go:38] duration metric: took 4m9.027997903s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:22.664746   80180 kubeadm.go:597] duration metric: took 4m19.955287366s to restartPrimaryControlPlane
	W0717 18:45:22.664823   80180 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:22.664854   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:20.793543   80401 addons.go:510] duration metric: took 1.283145408s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 18:45:21.766367   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.252243   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.771415   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:24.771443   80401 pod_ready.go:81] duration metric: took 5.026437249s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:24.771457   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:26.777371   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:28.778629   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.277550   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.792126   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.792154   80401 pod_ready.go:81] duration metric: took 7.020687724s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.792168   80401 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798687   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.798708   80401 pod_ready.go:81] duration metric: took 6.534344ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798717   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803428   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.803452   80401 pod_ready.go:81] duration metric: took 4.727536ms for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803464   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815053   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.815078   80401 pod_ready.go:81] duration metric: took 11.60679ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815092   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824126   80401 pod_ready.go:92] pod "kube-proxy-rgp5c" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.824151   80401 pod_ready.go:81] duration metric: took 9.050394ms for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824163   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176378   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:32.176404   80401 pod_ready.go:81] duration metric: took 352.232802ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176414   80401 pod_ready.go:38] duration metric: took 12.437548785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:32.176430   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:32.176492   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:32.190918   80401 api_server.go:72] duration metric: took 12.680546008s to wait for apiserver process to appear ...
	I0717 18:45:32.190942   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:32.190963   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:45:32.196011   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:45:32.197004   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:45:32.197024   80401 api_server.go:131] duration metric: took 6.075734ms to wait for apiserver health ...
	I0717 18:45:32.197033   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:32.379383   80401 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:32.379412   80401 system_pods.go:61] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.379416   80401 system_pods.go:61] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.379420   80401 system_pods.go:61] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.379423   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.379427   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.379431   80401 system_pods.go:61] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.379433   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.379439   80401 system_pods.go:61] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.379442   80401 system_pods.go:61] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.379450   80401 system_pods.go:74] duration metric: took 182.412193ms to wait for pod list to return data ...
	I0717 18:45:32.379456   80401 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:32.576324   80401 default_sa.go:45] found service account: "default"
	I0717 18:45:32.576348   80401 default_sa.go:55] duration metric: took 196.886306ms for default service account to be created ...
	I0717 18:45:32.576357   80401 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:32.780237   80401 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:32.780266   80401 system_pods.go:89] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.780272   80401 system_pods.go:89] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.780276   80401 system_pods.go:89] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.780280   80401 system_pods.go:89] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.780284   80401 system_pods.go:89] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.780288   80401 system_pods.go:89] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.780291   80401 system_pods.go:89] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.780298   80401 system_pods.go:89] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.780302   80401 system_pods.go:89] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.780314   80401 system_pods.go:126] duration metric: took 203.948509ms to wait for k8s-apps to be running ...
	I0717 18:45:32.780323   80401 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:32.780368   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:32.796763   80401 system_svc.go:56] duration metric: took 16.430293ms WaitForService to wait for kubelet
	I0717 18:45:32.796791   80401 kubeadm.go:582] duration metric: took 13.286425468s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:32.796809   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:32.977271   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:32.977295   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:32.977305   80401 node_conditions.go:105] duration metric: took 180.491938ms to run NodePressure ...
	I0717 18:45:32.977315   80401 start.go:241] waiting for startup goroutines ...
	I0717 18:45:32.977322   80401 start.go:246] waiting for cluster config update ...
	I0717 18:45:32.977331   80401 start.go:255] writing updated cluster config ...
	I0717 18:45:32.977544   80401 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:33.022678   80401 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 18:45:33.024737   80401 out.go:177] * Done! kubectl is now configured to use "no-preload-066175" cluster and "default" namespace by default
	I0717 18:45:33.625503   81068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.034773328s)
	I0717 18:45:33.625584   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:33.640151   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:33.650198   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:33.659027   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:33.659048   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:33.659088   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:45:33.667607   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:33.667663   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:33.677632   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:45:33.685631   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:33.685683   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:33.694068   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.702840   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:33.702894   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.711560   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:45:33.719883   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:33.719928   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:33.729898   81068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:33.781672   81068 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:45:33.781776   81068 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:33.908046   81068 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:33.908199   81068 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:33.908366   81068 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:45:34.103926   81068 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:34.105872   81068 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:34.105979   81068 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:34.106063   81068 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:34.106183   81068 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:34.106425   81068 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:34.106542   81068 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:34.106624   81068 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:34.106729   81068 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:34.106827   81068 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:34.106901   81068 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:34.106984   81068 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:34.107046   81068 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:34.107142   81068 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:34.390326   81068 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:34.442610   81068 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:34.692719   81068 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:34.777644   81068 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:35.101349   81068 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:35.102039   81068 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:35.104892   81068 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:35.106561   81068 out.go:204]   - Booting up control plane ...
	I0717 18:45:35.106689   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:35.106775   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:35.107611   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:35.126132   81068 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:35.127180   81068 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:35.127245   81068 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:35.250173   81068 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:35.250284   81068 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:35.752731   81068 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.583425ms
	I0717 18:45:35.752861   81068 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:45:40.754304   81068 kubeadm.go:310] [api-check] The API server is healthy after 5.001385597s
	I0717 18:45:40.766072   81068 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:40.785708   81068 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:40.816360   81068 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:40.816576   81068 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-022930 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:40.830588   81068 kubeadm.go:310] [bootstrap-token] Using token: kxmxsp.4wnt2q9oqhdfdirj
	I0717 18:45:40.831905   81068 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:40.832031   81068 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:40.840754   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:40.850104   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:40.853748   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:40.857341   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:40.860783   81068 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:41.161978   81068 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:41.600410   81068 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:42.161763   81068 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:42.163450   81068 kubeadm.go:310] 
	I0717 18:45:42.163541   81068 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:42.163558   81068 kubeadm.go:310] 
	I0717 18:45:42.163661   81068 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:42.163673   81068 kubeadm.go:310] 
	I0717 18:45:42.163707   81068 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:42.163797   81068 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:42.163870   81068 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:42.163881   81068 kubeadm.go:310] 
	I0717 18:45:42.163974   81068 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:42.163990   81068 kubeadm.go:310] 
	I0717 18:45:42.164058   81068 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:42.164077   81068 kubeadm.go:310] 
	I0717 18:45:42.164151   81068 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:42.164256   81068 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:42.164367   81068 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:42.164377   81068 kubeadm.go:310] 
	I0717 18:45:42.164489   81068 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:42.164588   81068 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:42.164595   81068 kubeadm.go:310] 
	I0717 18:45:42.164683   81068 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.164826   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:42.164862   81068 kubeadm.go:310] 	--control-plane 
	I0717 18:45:42.164870   81068 kubeadm.go:310] 
	I0717 18:45:42.165002   81068 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:42.165012   81068 kubeadm.go:310] 
	I0717 18:45:42.165143   81068 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.165257   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:42.166381   81068 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:42.166436   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:45:42.166456   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:42.168387   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:42.169678   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:42.180065   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:45:42.197116   81068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:42.197192   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.197217   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-022930 minikube.k8s.io/updated_at=2024_07_17T18_45_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=default-k8s-diff-port-022930 minikube.k8s.io/primary=true
	I0717 18:45:42.216456   81068 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:42.370148   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.870732   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.370980   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.871201   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.370616   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.370377   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.870614   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.370555   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.870513   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.370594   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.870651   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.370620   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.870863   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.371058   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.870188   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.370949   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.871187   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.370764   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.370298   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.870917   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.371193   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.870491   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.370274   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.871160   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.370879   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.870592   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.948131   81068 kubeadm.go:1113] duration metric: took 13.751000929s to wait for elevateKubeSystemPrivileges
	I0717 18:45:55.948166   81068 kubeadm.go:394] duration metric: took 5m11.453950834s to StartCluster
	I0717 18:45:55.948188   81068 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.948265   81068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:55.950777   81068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.951066   81068 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:55.951134   81068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:55.951202   81068 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951237   81068 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951247   81068 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:55.951243   81068 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951257   81068 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951293   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:45:55.951300   81068 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951318   81068 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:55.951319   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951348   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951292   81068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-022930"
	I0717 18:45:55.951712   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951732   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951769   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951747   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.952885   81068 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:55.954423   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:55.968158   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0717 18:45:55.968547   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
	I0717 18:45:55.968768   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.968917   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.969414   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969436   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969548   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969566   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969814   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970012   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970235   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.970413   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.970462   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.970809   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44281
	I0717 18:45:55.971165   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.974130   81068 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.974155   81068 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:55.974184   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.974549   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.974578   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.981608   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.981640   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.982054   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.982711   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.982754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.990665   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0717 18:45:55.991297   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.991922   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.991938   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.992213   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.992346   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.993952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:55.996135   81068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:55.997555   81068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:55.997579   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:55.997602   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:55.998414   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0717 18:45:55.998963   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.999540   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.999554   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.000799   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0717 18:45:56.001014   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001096   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.001419   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.001512   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.001527   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001755   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.001929   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.002102   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.002141   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:56.002178   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:56.002255   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.002686   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.002709   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.003047   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.003251   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.004660   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.006355   81068 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:56.007646   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:56.007663   81068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:56.007678   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.010711   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.011220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011452   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.011637   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.011806   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.011932   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.021277   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0717 18:45:56.021980   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.022568   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.022585   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.022949   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.023127   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.025023   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.025443   81068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.025458   81068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:56.025476   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.028095   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.028477   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028666   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.028853   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.029081   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.029226   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.173482   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:56.194585   81068 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203594   81068 node_ready.go:49] node "default-k8s-diff-port-022930" has status "Ready":"True"
	I0717 18:45:56.203614   81068 node_ready.go:38] duration metric: took 8.994875ms for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203623   81068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:56.207834   81068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212424   81068 pod_ready.go:92] pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.212444   81068 pod_ready.go:81] duration metric: took 4.58857ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212454   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217013   81068 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.217031   81068 pod_ready.go:81] duration metric: took 4.569971ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217040   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221441   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.221458   81068 pod_ready.go:81] duration metric: took 4.411121ms for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221470   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.268740   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:56.268765   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:56.290194   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.310957   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:56.310981   81068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:56.352789   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:56.352821   81068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:56.378402   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:56.379632   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:56.518737   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.518766   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519075   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519097   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.519108   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.519117   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519352   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519383   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519426   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.529290   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.529317   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.529618   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.529680   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.529697   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386401   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007961919s)
	I0717 18:45:57.386463   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.386480   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386925   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.386980   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386999   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.387017   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386958   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.387283   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.387304   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731240   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351571451s)
	I0717 18:45:57.731287   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731616   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.731650   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731664   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731672   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731685   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731905   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731930   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731949   81068 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-022930"
	I0717 18:45:57.731960   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.734601   81068 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 18:45:53.693038   80180 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.028164403s)
	I0717 18:45:53.693099   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:53.709020   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:53.718790   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:53.728384   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:53.728405   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:53.728444   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:53.737315   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:53.737384   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:53.746336   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:53.754297   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:53.754347   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:53.763252   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.772186   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:53.772229   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.780829   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:53.788899   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:53.788955   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:53.797324   80180 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:53.982580   80180 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:57.735769   81068 addons.go:510] duration metric: took 1.784634456s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 18:45:57.742312   81068 pod_ready.go:92] pod "kube-proxy-hnb5v" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.742333   81068 pod_ready.go:81] duration metric: took 1.520854667s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.742344   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809858   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.809885   81068 pod_ready.go:81] duration metric: took 67.527182ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809896   81068 pod_ready.go:38] duration metric: took 1.606263576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
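(The pod_ready waits above look for the Ready condition on each system-critical pod matched by the listed label selectors. A rough client-go equivalent for one of those selectors, offered as a sketch rather than the code behind pod_ready.go; the kubeconfig path is a placeholder:)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One of the selectors the log waits on; the component=... selectors work the same way.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(p))
	}
}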
	I0717 18:45:57.809914   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:57.809972   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:57.847337   81068 api_server.go:72] duration metric: took 1.896234247s to wait for apiserver process to appear ...
	I0717 18:45:57.847366   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:57.847391   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:45:57.853537   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:45:57.856587   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:45:57.856661   81068 api_server.go:131] duration metric: took 9.286402ms to wait for apiserver health ...
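(The healthz wait above polls https://192.168.50.245:8444/healthz until it answers 200 "ok". A bare-bones sketch of such a poll using only the standard library; it assumes the endpoint can be reached without client credentials, whereas minikube's own check goes through the cluster's TLS and auth configuration:)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed certificate, so skip verification in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.245:8444/healthz", 4*time.Minute))
}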
	I0717 18:45:57.856684   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:58.002336   81068 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:58.002374   81068 system_pods.go:61] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002383   81068 system_pods.go:61] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002396   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.002402   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.002408   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.002414   81068 system_pods.go:61] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.002418   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.002425   81068 system_pods.go:61] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.002435   81068 system_pods.go:61] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.002452   81068 system_pods.go:74] duration metric: took 145.752129ms to wait for pod list to return data ...
	I0717 18:45:58.002463   81068 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:58.197223   81068 default_sa.go:45] found service account: "default"
	I0717 18:45:58.197250   81068 default_sa.go:55] duration metric: took 194.774408ms for default service account to be created ...
	I0717 18:45:58.197260   81068 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:58.401825   81068 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:58.401878   81068 system_pods.go:89] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401891   81068 system_pods.go:89] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401904   81068 system_pods.go:89] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.401917   81068 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.401927   81068 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.401935   81068 system_pods.go:89] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.401940   81068 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.401948   81068 system_pods.go:89] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.401956   81068 system_pods.go:89] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.401965   81068 system_pods.go:126] duration metric: took 204.700297ms to wait for k8s-apps to be running ...
	I0717 18:45:58.401975   81068 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:58.402024   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:58.416020   81068 system_svc.go:56] duration metric: took 14.023536ms WaitForService to wait for kubelet
	I0717 18:45:58.416056   81068 kubeadm.go:582] duration metric: took 2.464957357s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:58.416079   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:58.598829   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:58.598863   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:58.598876   81068 node_conditions.go:105] duration metric: took 182.791383ms to run NodePressure ...
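(The NodePressure verification above reads the node's reported capacity, 17734596Ki of ephemeral storage and 2 CPUs, and confirms no adverse conditions are set. A small client-go sketch along the same lines, with the kubeconfig path as a placeholder:)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-022930", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), storage.String())
	for _, c := range node.Status.Conditions {
		// Any condition other than Ready that reports True (MemoryPressure, DiskPressure,
		// PIDPressure, NetworkUnavailable) indicates a problem on the node.
		if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
			fmt.Printf("condition %s is True: %s\n", c.Type, c.Message)
		}
	}
}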
	I0717 18:45:58.598891   81068 start.go:241] waiting for startup goroutines ...
	I0717 18:45:58.598899   81068 start.go:246] waiting for cluster config update ...
	I0717 18:45:58.598912   81068 start.go:255] writing updated cluster config ...
	I0717 18:45:58.599267   81068 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:58.661380   81068 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:45:58.663085   81068 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-022930" cluster and "default" namespace by default
	I0717 18:46:02.558673   80180 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:46:02.558766   80180 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:02.558842   80180 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:02.558980   80180 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:02.559118   80180 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:02.559210   80180 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:02.561934   80180 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:02.562036   80180 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:02.562108   80180 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:02.562191   80180 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:02.562290   80180 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:02.562393   80180 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:02.562478   80180 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:02.562565   80180 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:02.562643   80180 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:02.562711   80180 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:02.562826   80180 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:02.562886   80180 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:02.562958   80180 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:02.563005   80180 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:02.563081   80180 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:46:02.563136   80180 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:02.563210   80180 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:02.563293   80180 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:02.563405   80180 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:02.563468   80180 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:02.564989   80180 out.go:204]   - Booting up control plane ...
	I0717 18:46:02.565092   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:02.565181   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:02.565270   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:02.565400   80180 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:02.565526   80180 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:02.565597   80180 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:02.565783   80180 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:46:02.565880   80180 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:46:02.565959   80180 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.323304ms
	I0717 18:46:02.566046   80180 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:46:02.566105   80180 kubeadm.go:310] [api-check] The API server is healthy after 5.002038309s
	I0717 18:46:02.566206   80180 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:46:02.566307   80180 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:46:02.566359   80180 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:46:02.566525   80180 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-527415 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:46:02.566575   80180 kubeadm.go:310] [bootstrap-token] Using token: xeax16.7z40teb0jswemrgg
	I0717 18:46:02.568038   80180 out.go:204]   - Configuring RBAC rules ...
	I0717 18:46:02.568120   80180 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:46:02.568194   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:46:02.568314   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:46:02.568449   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:46:02.568553   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:46:02.568660   80180 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:46:02.568807   80180 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:46:02.568877   80180 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:46:02.568926   80180 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:46:02.568936   80180 kubeadm.go:310] 
	I0717 18:46:02.569032   80180 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:46:02.569044   80180 kubeadm.go:310] 
	I0717 18:46:02.569108   80180 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:46:02.569114   80180 kubeadm.go:310] 
	I0717 18:46:02.569157   80180 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:46:02.569249   80180 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:46:02.569326   80180 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:46:02.569346   80180 kubeadm.go:310] 
	I0717 18:46:02.569432   80180 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:46:02.569442   80180 kubeadm.go:310] 
	I0717 18:46:02.569511   80180 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:46:02.569519   80180 kubeadm.go:310] 
	I0717 18:46:02.569599   80180 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:46:02.569695   80180 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:46:02.569790   80180 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:46:02.569797   80180 kubeadm.go:310] 
	I0717 18:46:02.569905   80180 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:46:02.569985   80180 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:46:02.569998   80180 kubeadm.go:310] 
	I0717 18:46:02.570096   80180 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570234   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:46:02.570264   80180 kubeadm.go:310] 	--control-plane 
	I0717 18:46:02.570273   80180 kubeadm.go:310] 
	I0717 18:46:02.570348   80180 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:46:02.570355   80180 kubeadm.go:310] 
	I0717 18:46:02.570429   80180 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570555   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:46:02.570569   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:46:02.570578   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:46:02.571934   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:46:02.573034   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:46:02.583253   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:46:02.603658   80180 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-527415 minikube.k8s.io/updated_at=2024_07_17T18_46_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=embed-certs-527415 minikube.k8s.io/primary=true
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:02.621414   80180 ops.go:34] apiserver oom_adj: -16
	I0717 18:46:02.792226   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.292632   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.792270   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.293220   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.793011   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.292596   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.793043   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.293286   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.793069   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.292569   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.792604   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.293028   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.792259   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.292273   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.792672   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.293080   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.792442   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.292894   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.792436   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.292411   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.792327   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.292909   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.792878   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.293188   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.793038   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.292453   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.792367   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.898487   80180 kubeadm.go:1113] duration metric: took 13.294815165s to wait for elevateKubeSystemPrivileges
	I0717 18:46:15.898528   80180 kubeadm.go:394] duration metric: took 5m13.234208822s to StartCluster
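(The repeated "kubectl get sa default" runs above are waiting for kube-controller-manager to create the "default" service account before StartCluster completes. A hedged client-go version of the same wait, again with a placeholder kubeconfig path:)

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls until the "default" service account exists in the default namespace.
func waitForDefaultSA(cs *kubernetes.Clientset, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			return nil
		}
		if !apierrors.IsNotFound(err) {
			return err
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForDefaultSA(cs, 5*time.Minute))
}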
	I0717 18:46:15.898546   80180 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.898626   80180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:46:15.900239   80180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.900462   80180 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:46:15.900564   80180 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:46:15.900648   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:46:15.900655   80180 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-527415"
	I0717 18:46:15.900667   80180 addons.go:69] Setting default-storageclass=true in profile "embed-certs-527415"
	I0717 18:46:15.900691   80180 addons.go:69] Setting metrics-server=true in profile "embed-certs-527415"
	I0717 18:46:15.900704   80180 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-527415"
	I0717 18:46:15.900709   80180 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-527415"
	I0717 18:46:15.900714   80180 addons.go:234] Setting addon metrics-server=true in "embed-certs-527415"
	W0717 18:46:15.900747   80180 addons.go:243] addon metrics-server should already be in state true
	I0717 18:46:15.900777   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	W0717 18:46:15.900715   80180 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:46:15.900852   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.901106   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901150   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901152   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901183   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901264   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901298   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.902177   80180 out.go:177] * Verifying Kubernetes components...
	I0717 18:46:15.903698   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:46:15.918294   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0717 18:46:15.918295   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0717 18:46:15.918859   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.918909   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919433   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919455   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919478   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I0717 18:46:15.919548   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919572   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919788   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.919875   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919883   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920316   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920323   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.920338   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.920345   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920387   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920425   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920695   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920890   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.924623   80180 addons.go:234] Setting addon default-storageclass=true in "embed-certs-527415"
	W0717 18:46:15.924644   80180 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:46:15.924672   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.925801   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.925830   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.936020   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0717 18:46:15.936280   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0717 18:46:15.936365   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.936674   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.937144   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937164   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937229   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937239   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937565   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937587   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937770   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.937872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.939671   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.939856   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.941929   80180 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:46:15.941934   80180 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:46:15.943632   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:46:15.943650   80180 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:46:15.943668   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.943715   80180 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:15.943724   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:46:15.943737   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.946283   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0717 18:46:15.946815   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.947230   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.947240   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.947272   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.947953   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.947987   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948001   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.948179   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.948223   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948248   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.948388   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.948604   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.948627   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.948653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948832   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.948870   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.948895   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.949086   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.949307   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.949454   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.969385   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0717 18:46:15.969789   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.970221   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.970241   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.970756   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.970963   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.972631   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.972849   80180 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:15.972868   80180 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:46:15.972889   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.975680   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976123   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.976187   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976320   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.976496   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.976657   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.976748   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:16.134605   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:46:16.206139   80180 node_ready.go:35] waiting up to 6m0s for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214532   80180 node_ready.go:49] node "embed-certs-527415" has status "Ready":"True"
	I0717 18:46:16.214550   80180 node_ready.go:38] duration metric: took 8.382109ms for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214568   80180 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:16.223573   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:16.254146   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:46:16.254166   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:46:16.293257   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:16.312304   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:16.334927   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:46:16.334949   80180 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:46:16.404696   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:16.404723   80180 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:46:16.462835   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281088   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281157   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281395   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281402   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281424   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281427   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281432   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281436   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281676   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281678   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281700   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281705   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281722   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281732   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.300264   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.300294   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.300592   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.300643   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.300672   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.489477   80180 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026593042s)
	I0717 18:46:17.489520   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.489534   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490020   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.490047   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490055   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490068   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.490077   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490344   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490373   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490384   80180 addons.go:475] Verifying addon metrics-server=true in "embed-certs-527415"
	I0717 18:46:17.490397   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.492257   80180 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:46:17.493487   80180 addons.go:510] duration metric: took 1.592928152s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
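(The "Verifying addon metrics-server=true" step above only confirms the manifests were applied; the metrics-server pod is still Pending at this point in the log. One way to check whether the addon actually came up is to look at the Deployment's ready replicas, sketched here with client-go. The Deployment name "metrics-server" is inferred from the ReplicaSet hash in the pod name and the kubeconfig path is a placeholder, so treat both as assumptions:)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Deployment name assumed from the pod name metrics-server-569cc877fc-hvxtg.
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	want := int32(1)
	if dep.Spec.Replicas != nil {
		want = *dep.Spec.Replicas
	}
	fmt.Printf("metrics-server ready %d/%d\n", dep.Status.ReadyReplicas, want)
}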
	I0717 18:46:18.230569   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.230592   80180 pod_ready.go:81] duration metric: took 2.006995421s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.230603   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235298   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.235317   80180 pod_ready.go:81] duration metric: took 4.707534ms for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235327   80180 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.238998   80180 pod_ready.go:92] pod "etcd-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.239015   80180 pod_ready.go:81] duration metric: took 3.681191ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.239023   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242949   80180 pod_ready.go:92] pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.242967   80180 pod_ready.go:81] duration metric: took 3.937614ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242977   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246567   80180 pod_ready.go:92] pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.246580   80180 pod_ready.go:81] duration metric: took 3.597434ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246588   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628607   80180 pod_ready.go:92] pod "kube-proxy-m52fq" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.628636   80180 pod_ready.go:81] duration metric: took 382.042151ms for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628650   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028536   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:19.028558   80180 pod_ready.go:81] duration metric: took 399.900565ms for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028565   80180 pod_ready.go:38] duration metric: took 2.813989212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:19.028578   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:46:19.028630   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:46:19.044787   80180 api_server.go:72] duration metric: took 3.144295616s to wait for apiserver process to appear ...
	I0717 18:46:19.044810   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:46:19.044825   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:46:19.051106   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:46:19.052094   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:46:19.052111   80180 api_server.go:131] duration metric: took 7.296406ms to wait for apiserver health ...
	I0717 18:46:19.052117   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:46:19.231877   80180 system_pods.go:59] 9 kube-system pods found
	I0717 18:46:19.231905   80180 system_pods.go:61] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.231912   80180 system_pods.go:61] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.231916   80180 system_pods.go:61] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.231921   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.231925   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.231929   80180 system_pods.go:61] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.231934   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.231942   80180 system_pods.go:61] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.231947   80180 system_pods.go:61] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.231957   80180 system_pods.go:74] duration metric: took 179.833729ms to wait for pod list to return data ...
	I0717 18:46:19.231966   80180 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:46:19.427972   80180 default_sa.go:45] found service account: "default"
	I0717 18:46:19.427994   80180 default_sa.go:55] duration metric: took 196.021611ms for default service account to be created ...
	I0717 18:46:19.428002   80180 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:46:19.630730   80180 system_pods.go:86] 9 kube-system pods found
	I0717 18:46:19.630755   80180 system_pods.go:89] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.630760   80180 system_pods.go:89] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.630765   80180 system_pods.go:89] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.630769   80180 system_pods.go:89] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.630774   80180 system_pods.go:89] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.630778   80180 system_pods.go:89] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.630782   80180 system_pods.go:89] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.630788   80180 system_pods.go:89] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.630792   80180 system_pods.go:89] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.630800   80180 system_pods.go:126] duration metric: took 202.793522ms to wait for k8s-apps to be running ...
	I0717 18:46:19.630806   80180 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:46:19.630849   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:19.646111   80180 system_svc.go:56] duration metric: took 15.296964ms WaitForService to wait for kubelet
	I0717 18:46:19.646133   80180 kubeadm.go:582] duration metric: took 3.745647205s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:46:19.646149   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:46:19.828333   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:46:19.828356   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:46:19.828368   80180 node_conditions.go:105] duration metric: took 182.213813ms to run NodePressure ...
	I0717 18:46:19.828381   80180 start.go:241] waiting for startup goroutines ...
	I0717 18:46:19.828389   80180 start.go:246] waiting for cluster config update ...
	I0717 18:46:19.828401   80180 start.go:255] writing updated cluster config ...
	I0717 18:46:19.828690   80180 ssh_runner.go:195] Run: rm -f paused
	I0717 18:46:19.877774   80180 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:46:19.879769   80180 out.go:177] * Done! kubectl is now configured to use "embed-certs-527415" cluster and "default" namespace by default
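The checks the log performs above (pgrep for the apiserver, the /healthz probe, the kube-system pod list) can be reproduced by hand when verifying a freshly started profile. A minimal sketch, assuming the embed-certs-527415 profile and the 192.168.61.90:8443 apiserver endpoint reported in this log; substitute your own profile name and address:

  # Confirm the apiserver process is up inside the node (same check as the pgrep in the log).
  minikube -p embed-certs-527415 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"
  # Probe the apiserver health endpoint; it should print "ok" as it does in the log above.
  curl -k https://192.168.61.90:8443/healthz
  # List the kube-system pods the log waited on (coredns, etcd, kube-apiserver, ...).
  kubectl --context embed-certs-527415 -n kube-system get pods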
	I0717 18:46:33.124646   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:46:33.124790   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:46:33.126245   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.126307   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.126409   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.126547   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.126673   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:33.126734   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:33.128541   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:33.128626   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:33.128707   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:33.128817   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:33.128901   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:33.129018   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:33.129091   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:33.129172   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:33.129249   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:33.129339   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:33.129408   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:33.129444   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:33.129532   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:33.129603   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:33.129665   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:33.129765   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:33.129812   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:33.129929   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:33.130037   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:33.130093   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:33.130177   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:33.131546   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:33.131652   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:33.131750   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:33.131858   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:33.131939   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:33.132085   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:46:33.132133   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:46:33.132189   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132355   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132419   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132585   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132657   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132839   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132900   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133143   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133248   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133452   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133460   80857 kubeadm.go:310] 
	I0717 18:46:33.133494   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:46:33.133529   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:46:33.133535   80857 kubeadm.go:310] 
	I0717 18:46:33.133564   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:46:33.133599   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:46:33.133727   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:46:33.133752   80857 kubeadm.go:310] 
	I0717 18:46:33.133905   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:46:33.133947   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:46:33.134002   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:46:33.134012   80857 kubeadm.go:310] 
	I0717 18:46:33.134116   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:46:33.134186   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:46:33.134193   80857 kubeadm.go:310] 
	I0717 18:46:33.134290   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:46:33.134367   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:46:33.134431   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:46:33.134491   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:46:33.134533   80857 kubeadm.go:310] 
	W0717 18:46:33.134615   80857 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
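The diagnostics suggested in the kubeadm output above can be run directly on the node before the retry; a minimal sketch, assuming a shell inside the minikube node (for example via `minikube ssh`) with CRI-O as the runtime, using the same socket path the error text names:

  # Kubelet service state and recent journal entries, as the error text recommends.
  sudo systemctl status kubelet
  sudo journalctl -xeu kubelet
  # List any control-plane containers CRI-O started; CONTAINERID below is a placeholder
  # for an ID taken from this listing.
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID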
	
	I0717 18:46:33.134669   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:46:33.590879   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:33.605393   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:46:33.614382   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:46:33.614405   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:46:33.614450   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:46:33.622849   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:46:33.622905   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:46:33.631852   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:46:33.640160   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:46:33.640211   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:46:33.648774   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.656740   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:46:33.656796   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.665799   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:46:33.674492   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:46:33.674547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:46:33.683627   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:46:33.746405   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.746472   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.881152   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.881297   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.881443   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:34.053199   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:34.055757   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:34.055843   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:34.055918   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:34.056030   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:34.056129   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:34.056232   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:34.056336   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:34.056431   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:34.056524   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:34.056656   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:34.056764   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:34.056824   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:34.056900   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:34.276456   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:34.491418   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:34.702265   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:34.874511   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:34.895484   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:34.896451   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:34.896536   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:35.040208   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:35.042291   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:35.042437   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:35.042565   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:35.044391   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:35.046206   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:35.050843   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:47:15.053070   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:47:15.053416   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:15.053586   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:20.053963   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:20.054207   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:30.054801   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:30.055011   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:50.055270   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:50.055465   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.053919   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:48:30.054133   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.054148   80857 kubeadm.go:310] 
	I0717 18:48:30.054231   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:48:30.054300   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:48:30.054326   80857 kubeadm.go:310] 
	I0717 18:48:30.054386   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:48:30.054443   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:48:30.054581   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:48:30.054593   80857 kubeadm.go:310] 
	I0717 18:48:30.054715   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:48:30.054761   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:48:30.054810   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:48:30.054818   80857 kubeadm.go:310] 
	I0717 18:48:30.054970   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:48:30.055069   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:48:30.055081   80857 kubeadm.go:310] 
	I0717 18:48:30.055236   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:48:30.055332   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:48:30.055396   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:48:30.055457   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:48:30.055483   80857 kubeadm.go:310] 
	I0717 18:48:30.056139   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:48:30.056246   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:48:30.056338   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:48:30.056413   80857 kubeadm.go:394] duration metric: took 8m2.908780359s to StartCluster
	I0717 18:48:30.056461   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:48:30.056524   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:48:30.102640   80857 cri.go:89] found id: ""
	I0717 18:48:30.102662   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.102669   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:48:30.102674   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:48:30.102724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:48:30.142516   80857 cri.go:89] found id: ""
	I0717 18:48:30.142548   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.142559   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:48:30.142567   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:48:30.142630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:48:30.178558   80857 cri.go:89] found id: ""
	I0717 18:48:30.178589   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.178598   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:48:30.178604   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:48:30.178677   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:48:30.211146   80857 cri.go:89] found id: ""
	I0717 18:48:30.211177   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.211186   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:48:30.211192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:48:30.211242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:48:30.244287   80857 cri.go:89] found id: ""
	I0717 18:48:30.244308   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.244314   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:48:30.244319   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:48:30.244364   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:48:30.274547   80857 cri.go:89] found id: ""
	I0717 18:48:30.274577   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.274587   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:48:30.274594   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:48:30.274660   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:48:30.306796   80857 cri.go:89] found id: ""
	I0717 18:48:30.306825   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.306835   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:48:30.306842   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:48:30.306903   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:48:30.341938   80857 cri.go:89] found id: ""
	I0717 18:48:30.341962   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.341972   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:48:30.341982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:48:30.341997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:48:30.407881   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:48:30.407925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:48:30.430885   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:48:30.430913   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:48:30.525366   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:48:30.525394   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:48:30.525408   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:48:30.639556   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:48:30.639588   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 18:48:30.677493   80857 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 18:48:30.677544   80857 out.go:239] * 
	W0717 18:48:30.677604   80857 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.677636   80857 out.go:239] * 
	W0717 18:48:30.678483   80857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:48:30.681792   80857 out.go:177] 
	W0717 18:48:30.682976   80857 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.683034   80857 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 18:48:30.683050   80857 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 18:48:30.684325   80857 out.go:177] 
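The final suggestion above points at retrying the profile with the kubelet cgroup driver forced to systemd. A minimal sketch of that retry, assuming the KVM/CRI-O configuration used in this run; <profile> is a placeholder for the failing profile name, and the driver/runtime flags are assumptions based on this job's configuration rather than taken from the log:

  # Re-check the kubelet journal first, as the suggestion advises.
  minikube -p <profile> ssh "sudo journalctl -xeu kubelet | tail -n 100"
  # Retry the start with the cgroup driver override from the suggestion line.
  minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
    --extra-config=kubelet.cgroup-driver=systemd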
	
	
	==> CRI-O <==
	Jul 17 18:55:21 embed-certs-527415 crio[724]: time="2024-07-17 18:55:21.984676819Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8211fc3362773d22258f266fb6992dc3a1cd5e4c663ba81a4bff531da4f7a47b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1721241976136934461,StartedAt:1721241976209614027,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m52fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40f99883-b343-43b3-8f94-4b45b379a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 8937d2b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/40f99883-b343-43b3-8f94-4b45b379a17b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/40f99883-b343-43b3-8f94-4b45b379a17b/containers/kube-proxy/e0632dec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var
/lib/kubelet/pods/40f99883-b343-43b3-8f94-4b45b379a17b/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/40f99883-b343-43b3-8f94-4b45b379a17b/volumes/kubernetes.io~projected/kube-api-access-j2fsr,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-m52fq_40f99883-b343-43b3-8f94-4b45b379a17b/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-
collector/interceptors.go:74" id=b7690aa6-dcc3-4604-8751-0e5bb1f4f127 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 18:55:21 embed-certs-527415 crio[724]: time="2024-07-17 18:55:21.985082669Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:f11369f730c593193e0a51ab3b1884ff6e0c4427208f7684a4848d89cbca3f6f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1fc80ac2-4c7a-4e4a-8f7f-02e1f58ea6eb name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 18:55:21 embed-certs-527415 crio[724]: time="2024-07-17 18:55:21.985195122Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:f11369f730c593193e0a51ab3b1884ff6e0c4427208f7684a4848d89cbca3f6f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1721241956752647623,StartedAt:1721241956862351162,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4839e942333313189ee9d179d15c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f8de2d1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/bc4839e942333313189ee9d179d15c6d/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/bc4839e942333313189ee9d179d15c6d/containers/etcd/9f549fba,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etc
d-embed-certs-527415_bc4839e942333313189ee9d179d15c6d/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1fc80ac2-4c7a-4e4a-8f7f-02e1f58ea6eb name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 18:55:21 embed-certs-527415 crio[724]: time="2024-07-17 18:55:21.985626532Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:55b10cf1d32d7f3c017da8f0dbe36f599bdb5ff6b6311bb8990129bcf1cec6dd,Verbose:false,}" file="otel-collector/interceptors.go:62" id=19508889-2261-4833-b8e4-03c660f4d16a name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 18:55:21 embed-certs-527415 crio[724]: time="2024-07-17 18:55:21.985728226Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:55b10cf1d32d7f3c017da8f0dbe36f599bdb5ff6b6311bb8990129bcf1cec6dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1721241956741437723,StartedAt:1721241956837223718,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76b685168398b77d009e8c3f7a3fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b76b685168398b77d009e8c3f7a3fe87/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b76b685168398b77d009e8c3f7a3fe87/containers/kube-scheduler/4b2dcfd1,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-embed-certs-527415_b76b685168398b77d009e8c3f7a3fe87/kube-scheduler/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{Cpu
Period:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=19508889-2261-4833-b8e4-03c660f4d16a name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 18:55:21 embed-certs-527415 crio[724]: time="2024-07-17 18:55:21.986188863Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e449290763e32cf4c1846fddcb73f1114ef0063c64231695d6e179e78ee4df22,Verbose:false,}" file="otel-collector/interceptors.go:62" id=82e1e0ba-9891-43d6-a6b0-c1e438d3bcdf name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 18:55:21 embed-certs-527415 crio[724]: time="2024-07-17 18:55:21.986320409Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e449290763e32cf4c1846fddcb73f1114ef0063c64231695d6e179e78ee4df22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1721241956690782799,StartedAt:1721241956779652507,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b289dab6de17ab6177769769972038a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b289dab6de17ab6177769769972038a4/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b289dab6de17ab6177769769972038a4/containers/kube-controller-manager/09727c26,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVA
TE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-embed-certs-527415_b289dab6de17ab6177769769972038a4/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,Cpus
etMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=82e1e0ba-9891-43d6-a6b0-c1e438d3bcdf name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 18:55:21 embed-certs-527415 crio[724]: time="2024-07-17 18:55:21.986878616Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:1d41b06ad4b2ab53745589288ead395180d7211c2722000c8cd8a00c52ea336a,Verbose:false,}" file="otel-collector/interceptors.go:62" id=16f8134c-f709-46f1-bfa9-48cf67254388 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 18:55:21 embed-certs-527415 crio[724]: time="2024-07-17 18:55:21.986979694Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:1d41b06ad4b2ab53745589288ead395180d7211c2722000c8cd8a00c52ea336a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1721241956610187077,StartedAt:1721241956709918487,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d49c409df9a954b7691247df1c8d9f62/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d49c409df9a954b7691247df1c8d9f62/containers/kube-apiserver/896da73f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Conta
inerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-embed-certs-527415_d49c409df9a954b7691247df1c8d9f62/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=16f8134c-f709-46f1-bfa9-48cf67254388 name=/runtime.v1.RuntimeService/ContainerStatus
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.001977261Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b8e58e9-ab0a-439c-b0d5-4974ea9aa9d8 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.002060876Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b8e58e9-ab0a-439c-b0d5-4974ea9aa9d8 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.003239270Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=076e36be-e345-4b0f-b806-456bf074c955 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.003603502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242522003586087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=076e36be-e345-4b0f-b806-456bf074c955 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.004278753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbeadece-2cd2-47ca-8e8a-b6d72d8a7509 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.004326988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbeadece-2cd2-47ca-8e8a-b6d72d8a7509 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.004530792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084cde7459c3484dc827fb95bcba3e12f9f645203aaf24df4adca837533190a1,PodSandboxId:743b7d635cfdb1f479dcbe06e739415f139acab6b4527d1bac8eb85bcc144aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241977665371070,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f473bbe-0727-4f25-ba39-4ed322767465,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9339a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:235f37418508acb48fcc568777f192c2d4ff408bb07c34e60ff528dea9b3d667,PodSandboxId:2d69fcb8f2d1d58f23a79b7f3659cd09bebcdb6921c894f9a1b0e97ad7d5bccd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241977022783713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f64kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0de6ef4-1402-44b2-81f3-3f234a72d151,},Annotations:map[string]string{io.kubernetes.container.hash: 3015fca6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a6b37c689d268a287e557e56dbd0f797e5b2a3730aa6ebd8af87140cc7730a,PodSandboxId:aa9485aab31cf0542a265efeef3a4cc43ef650a004ed8acd7bf72b539cba793c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241976797489685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2zt8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
2e90bb-5721-4ca8-8177-77e6b686175a,},Annotations:map[string]string{io.kubernetes.container.hash: 281e4adb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8211fc3362773d22258f266fb6992dc3a1cd5e4c663ba81a4bff531da4f7a47b,PodSandboxId:ff640218fc03e161303717a0241a423a64dcbadce452bcff096c2b57aed7283c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1721241976049738936,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m52fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40f99883-b343-43b3-8f94-4b45b379a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 8937d2b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11369f730c593193e0a51ab3b1884ff6e0c4427208f7684a4848d89cbca3f6f,PodSandboxId:5d69b061b22086c6bddfd20a559c7fad2550ac962a582276b8ce7bb41c7e5376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241956671166428,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4839e942333313189ee9d179d15c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f8de2d1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b10cf1d32d7f3c017da8f0dbe36f599bdb5ff6b6311bb8990129bcf1cec6dd,PodSandboxId:d407d3c1ec4c739b654836a85a28e210df7b8a51d487b1b5b38ae32abb07b809,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241956644669760,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76b685168398b77d009e8c3f7a3fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449290763e32cf4c1846fddcb73f1114ef0063c64231695d6e179e78ee4df22,PodSandboxId:417d02f26b483ae5dfd01e5b0408303e45e35d0525ad82700a1fe65c52de8f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241956614625350,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b289dab6de17ab6177769769972038a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d41b06ad4b2ab53745589288ead395180d7211c2722000c8cd8a00c52ea336a,PodSandboxId:92ac90338726334044e2ca283e436c9a37604b2fcd2671112fbfaecbd3632fb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241956550359305,Labels:map[string]string{io.kubernetes.container.
name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b10dddaf722511ea0efce15f066ecda5c95b478b728ec1ae9bd372d21694007,PodSandboxId:ad2e057203c1d3a5178ca18241e263f8e713c572996edf3324f709e7a51a81f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241666192293722,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbeadece-2cd2-47ca-8e8a-b6d72d8a7509 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.016861451Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=75796427-a37d-4ad4-b5de-a716c950bc6f name=/runtime.v1.RuntimeService/Status
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.017207297Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=75796427-a37d-4ad4-b5de-a716c950bc6f name=/runtime.v1.RuntimeService/Status
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.037634717Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bec2ff5-0527-4e70-80a8-0bf7aa8dbc0c name=/runtime.v1.RuntimeService/Version
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.037715710Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bec2ff5-0527-4e70-80a8-0bf7aa8dbc0c name=/runtime.v1.RuntimeService/Version
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.040119869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de613132-b6b1-4d13-a215-757550b73854 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.040570834Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242522040544334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de613132-b6b1-4d13-a215-757550b73854 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.041229757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dafcc30-4f18-4656-9d02-d5ba7680fbd6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.041283977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dafcc30-4f18-4656-9d02-d5ba7680fbd6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:55:22 embed-certs-527415 crio[724]: time="2024-07-17 18:55:22.041533952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084cde7459c3484dc827fb95bcba3e12f9f645203aaf24df4adca837533190a1,PodSandboxId:743b7d635cfdb1f479dcbe06e739415f139acab6b4527d1bac8eb85bcc144aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241977665371070,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f473bbe-0727-4f25-ba39-4ed322767465,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9339a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:235f37418508acb48fcc568777f192c2d4ff408bb07c34e60ff528dea9b3d667,PodSandboxId:2d69fcb8f2d1d58f23a79b7f3659cd09bebcdb6921c894f9a1b0e97ad7d5bccd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241977022783713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f64kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0de6ef4-1402-44b2-81f3-3f234a72d151,},Annotations:map[string]string{io.kubernetes.container.hash: 3015fca6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a6b37c689d268a287e557e56dbd0f797e5b2a3730aa6ebd8af87140cc7730a,PodSandboxId:aa9485aab31cf0542a265efeef3a4cc43ef650a004ed8acd7bf72b539cba793c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241976797489685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2zt8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
2e90bb-5721-4ca8-8177-77e6b686175a,},Annotations:map[string]string{io.kubernetes.container.hash: 281e4adb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8211fc3362773d22258f266fb6992dc3a1cd5e4c663ba81a4bff531da4f7a47b,PodSandboxId:ff640218fc03e161303717a0241a423a64dcbadce452bcff096c2b57aed7283c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1721241976049738936,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m52fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40f99883-b343-43b3-8f94-4b45b379a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 8937d2b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11369f730c593193e0a51ab3b1884ff6e0c4427208f7684a4848d89cbca3f6f,PodSandboxId:5d69b061b22086c6bddfd20a559c7fad2550ac962a582276b8ce7bb41c7e5376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241956671166428,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4839e942333313189ee9d179d15c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f8de2d1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b10cf1d32d7f3c017da8f0dbe36f599bdb5ff6b6311bb8990129bcf1cec6dd,PodSandboxId:d407d3c1ec4c739b654836a85a28e210df7b8a51d487b1b5b38ae32abb07b809,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241956644669760,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76b685168398b77d009e8c3f7a3fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449290763e32cf4c1846fddcb73f1114ef0063c64231695d6e179e78ee4df22,PodSandboxId:417d02f26b483ae5dfd01e5b0408303e45e35d0525ad82700a1fe65c52de8f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241956614625350,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b289dab6de17ab6177769769972038a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d41b06ad4b2ab53745589288ead395180d7211c2722000c8cd8a00c52ea336a,PodSandboxId:92ac90338726334044e2ca283e436c9a37604b2fcd2671112fbfaecbd3632fb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241956550359305,Labels:map[string]string{io.kubernetes.container.
name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b10dddaf722511ea0efce15f066ecda5c95b478b728ec1ae9bd372d21694007,PodSandboxId:ad2e057203c1d3a5178ca18241e263f8e713c572996edf3324f709e7a51a81f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241666192293722,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5dafcc30-4f18-4656-9d02-d5ba7680fbd6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	084cde7459c34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   743b7d635cfdb       storage-provisioner
	235f37418508a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   2d69fcb8f2d1d       coredns-7db6d8ff4d-f64kh
	38a6b37c689d2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   aa9485aab31cf       coredns-7db6d8ff4d-2zt8k
	8211fc3362773       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   9 minutes ago       Running             kube-proxy                0                   ff640218fc03e       kube-proxy-m52fq
	f11369f730c59       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   5d69b061b2208       etcd-embed-certs-527415
	55b10cf1d32d7       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   9 minutes ago       Running             kube-scheduler            2                   d407d3c1ec4c7       kube-scheduler-embed-certs-527415
	e449290763e32       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   9 minutes ago       Running             kube-controller-manager   2                   417d02f26b483       kube-controller-manager-embed-certs-527415
	1d41b06ad4b2a       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   9 minutes ago       Running             kube-apiserver            2                   92ac903387263       kube-apiserver-embed-certs-527415
	2b10dddaf7225       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   14 minutes ago      Exited              kube-apiserver            1                   ad2e057203c1d       kube-apiserver-embed-certs-527415
	
	
	==> coredns [235f37418508acb48fcc568777f192c2d4ff408bb07c34e60ff528dea9b3d667] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [38a6b37c689d268a287e557e56dbd0f797e5b2a3730aa6ebd8af87140cc7730a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-527415
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-527415
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=embed-certs-527415
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_46_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:45:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-527415
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 18:55:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:51:28 +0000   Wed, 17 Jul 2024 18:45:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:51:28 +0000   Wed, 17 Jul 2024 18:45:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:51:28 +0000   Wed, 17 Jul 2024 18:45:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:51:28 +0000   Wed, 17 Jul 2024 18:45:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.90
	  Hostname:    embed-certs-527415
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 87ee6cbc85374f4bbc0c06e2cbb3cc08
	  System UUID:                87ee6cbc-8537-4f4b-bc0c-06e2cbb3cc08
	  Boot ID:                    2c1ee72b-5496-4ad4-827f-43db07eaa370
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2zt8k                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-f64kh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-527415                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-527415             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-527415    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-m52fq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-527415             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-hvxtg               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m5s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node embed-certs-527415 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node embed-certs-527415 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node embed-certs-527415 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s   node-controller  Node embed-certs-527415 event: Registered Node embed-certs-527415 in Controller
	
	
	==> dmesg <==
	[  +0.039395] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.618263] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.823078] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.562158] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.990290] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.060111] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068930] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.183058] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.144409] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.266915] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[Jul17 18:41] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +1.889242] systemd-fstab-generator[927]: Ignoring "noauto" option for root device
	[  +0.062997] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.507151] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.274568] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.886694] kauditd_printk_skb: 27 callbacks suppressed
	[Jul17 18:45] systemd-fstab-generator[3574]: Ignoring "noauto" option for root device
	[  +0.070249] kauditd_printk_skb: 9 callbacks suppressed
	[Jul17 18:46] systemd-fstab-generator[3897]: Ignoring "noauto" option for root device
	[  +0.081317] kauditd_printk_skb: 54 callbacks suppressed
	[ +14.289528] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.026696] systemd-fstab-generator[4118]: Ignoring "noauto" option for root device
	[Jul17 18:47] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [f11369f730c593193e0a51ab3b1884ff6e0c4427208f7684a4848d89cbca3f6f] <==
	{"level":"info","ts":"2024-07-17T18:45:57.012563Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-17T18:45:57.01286Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"70b1d9345947c0fd","initial-advertise-peer-urls":["https://192.168.61.90:2380"],"listen-peer-urls":["https://192.168.61.90:2380"],"advertise-client-urls":["https://192.168.61.90:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.90:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T18:45:57.012895Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T18:45:57.012985Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.90:2380"}
	{"level":"info","ts":"2024-07-17T18:45:57.013009Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.90:2380"}
	{"level":"info","ts":"2024-07-17T18:45:57.033015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70b1d9345947c0fd switched to configuration voters=(8120510421985116413)"}
	{"level":"info","ts":"2024-07-17T18:45:57.033141Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b1c18dc80f06de23","local-member-id":"70b1d9345947c0fd","added-peer-id":"70b1d9345947c0fd","added-peer-peer-urls":["https://192.168.61.90:2380"]}
	{"level":"info","ts":"2024-07-17T18:45:57.159887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70b1d9345947c0fd is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:57.15994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70b1d9345947c0fd became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:57.159969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70b1d9345947c0fd received MsgPreVoteResp from 70b1d9345947c0fd at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:57.159981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70b1d9345947c0fd became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:57.159987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70b1d9345947c0fd received MsgVoteResp from 70b1d9345947c0fd at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:57.159995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"70b1d9345947c0fd became leader at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:57.160002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 70b1d9345947c0fd elected leader 70b1d9345947c0fd at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:57.162079Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:57.164099Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"70b1d9345947c0fd","local-member-attributes":"{Name:embed-certs-527415 ClientURLs:[https://192.168.61.90:2379]}","request-path":"/0/members/70b1d9345947c0fd/attributes","cluster-id":"b1c18dc80f06de23","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:45:57.165374Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:57.165481Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b1c18dc80f06de23","local-member-id":"70b1d9345947c0fd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:57.165612Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:57.167835Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:57.16573Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:57.169584Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T18:45:57.17388Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:45:57.17391Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:45:57.179436Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.90:2379"}
	
	
	==> kernel <==
	 18:55:22 up 14 min,  0 users,  load average: 0.10, 0.26, 0.17
	Linux embed-certs-527415 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1d41b06ad4b2ab53745589288ead395180d7211c2722000c8cd8a00c52ea336a] <==
	I0717 18:49:18.223633       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:50:59.395216       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:50:59.395330       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 18:51:00.396342       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:51:00.396462       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 18:51:00.396487       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:51:00.396525       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:51:00.396625       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 18:51:00.397943       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:52:00.397682       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:52:00.397841       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 18:52:00.397856       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:52:00.399056       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:52:00.399097       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 18:52:00.399105       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:54:00.398677       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:54:00.398792       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 18:54:00.398836       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:54:00.399708       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:54:00.399739       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 18:54:00.400895       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
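The 503s above show the aggregated v1beta1.metrics.k8s.io API never becoming available because its metrics-server backend is not running (see the kubelet log further down). A minimal after-the-fact check, assuming the embed-certs-527415 context is still reachable, would be to read the APIService condition directly:

	kubectl --context embed-certs-527415 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'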
	
	
	==> kube-apiserver [2b10dddaf722511ea0efce15f066ecda5c95b478b728ec1ae9bd372d21694007] <==
	W0717 18:45:52.356616       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.373072       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.397268       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.465792       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.477978       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.490414       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.499468       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.512674       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.572650       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.576011       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.674197       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.682177       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.711295       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.868716       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.983075       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.029255       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.074631       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.167116       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.171576       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.293438       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.352096       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.387176       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.469503       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.483094       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.497871       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
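The connection-refused errors above are this API server instance repeatedly failing to reach etcd on 127.0.0.1:2379 around 18:45:52, shortly before the etcd member above reports serving client traffic at 18:45:57. A sketch of a direct health probe, assuming etcdctl is present in the VM and that the certificates follow minikube's usual /var/lib/minikube/certs/etcd/ layout, would be:

	out/minikube-linux-amd64 -p embed-certs-527415 ssh "sudo ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/server.crt --key=/var/lib/minikube/certs/etcd/server.key endpoint health"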
	
	
	==> kube-controller-manager [e449290763e32cf4c1846fddcb73f1114ef0063c64231695d6e179e78ee4df22] <==
	I0717 18:49:46.173679       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:50:15.728567       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:50:16.181371       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:50:45.734309       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:50:46.189223       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:51:15.740079       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:51:16.196600       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:51:45.744957       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:51:46.204775       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 18:52:11.898125       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="455.637µs"
	E0717 18:52:15.751763       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:52:16.212867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 18:52:23.895570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="111.704µs"
	E0717 18:52:45.756612       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:52:46.221067       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:53:15.763596       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:53:16.227606       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:53:45.768991       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:53:46.235506       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:54:15.779879       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:54:16.244006       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:54:45.785739       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:54:46.253174       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:55:15.790670       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:55:16.261785       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8211fc3362773d22258f266fb6992dc3a1cd5e4c663ba81a4bff531da4f7a47b] <==
	I0717 18:46:16.320393       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:46:16.345033       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.90"]
	I0717 18:46:16.412948       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:46:16.412997       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:46:16.413014       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:46:16.416300       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:46:16.416517       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:46:16.416542       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:46:16.418236       1 config.go:192] "Starting service config controller"
	I0717 18:46:16.418265       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:46:16.418302       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:46:16.418307       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:46:16.422744       1 config.go:319] "Starting node config controller"
	I0717 18:46:16.422756       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:46:16.518874       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:46:16.518913       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:46:16.522982       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [55b10cf1d32d7f3c017da8f0dbe36f599bdb5ff6b6311bb8990129bcf1cec6dd] <==
	W0717 18:45:59.439651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:45:59.439689       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:45:59.439591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:45:59.439709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 18:45:59.439470       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:45:59.439723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:45:59.439860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:45:59.439939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:46:00.262348       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:46:00.262473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:46:00.294856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:46:00.294962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:46:00.360997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:46:00.361558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:46:00.395931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:46:00.395988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:46:00.419385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:46:00.419685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:46:00.433792       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:46:00.433862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:46:00.454389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:46:00.454468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:46:00.593281       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:46:00.593364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 18:46:01.030632       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
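The forbidden errors above look like a startup ordering issue: the scheduler's informers begin listing before its RBAC authorization is usable, and the final "Caches are synced" line shows it recovered on its own. A hypothetical check of the permission afterwards (not part of the test run) would be:

	kubectl --context embed-certs-527415 auth can-i list pods --as=system:kube-scheduler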
	
	
	==> kubelet <==
	Jul 17 18:53:01 embed-certs-527415 kubelet[3904]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:53:01 embed-certs-527415 kubelet[3904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:53:01 embed-certs-527415 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:53:06 embed-certs-527415 kubelet[3904]: E0717 18:53:06.878121    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:53:18 embed-certs-527415 kubelet[3904]: E0717 18:53:18.877692    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:53:29 embed-certs-527415 kubelet[3904]: E0717 18:53:29.878941    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:53:40 embed-certs-527415 kubelet[3904]: E0717 18:53:40.878448    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:53:51 embed-certs-527415 kubelet[3904]: E0717 18:53:51.879001    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:54:01 embed-certs-527415 kubelet[3904]: E0717 18:54:01.906041    3904 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:54:01 embed-certs-527415 kubelet[3904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:54:01 embed-certs-527415 kubelet[3904]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:54:01 embed-certs-527415 kubelet[3904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:54:01 embed-certs-527415 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:54:04 embed-certs-527415 kubelet[3904]: E0717 18:54:04.878575    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:54:15 embed-certs-527415 kubelet[3904]: E0717 18:54:15.878480    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:54:28 embed-certs-527415 kubelet[3904]: E0717 18:54:28.878219    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:54:39 embed-certs-527415 kubelet[3904]: E0717 18:54:39.878616    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:54:51 embed-certs-527415 kubelet[3904]: E0717 18:54:51.878663    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:55:01 embed-certs-527415 kubelet[3904]: E0717 18:55:01.909208    3904 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:55:01 embed-certs-527415 kubelet[3904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:55:01 embed-certs-527415 kubelet[3904]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:55:01 embed-certs-527415 kubelet[3904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:55:01 embed-certs-527415 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:55:04 embed-certs-527415 kubelet[3904]: E0717 18:55:04.879024    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:55:17 embed-certs-527415 kubelet[3904]: E0717 18:55:17.877954    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
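The kubelet is backing off on fake.domain/registry.k8s.io/echoserver:1.4, an unresolvable registry, so the metrics-server pod never starts; this is what keeps the APIService above unavailable. A sketch of inspecting that pod's events while it still exists (pod name and namespace taken from the log above):

	kubectl --context embed-certs-527415 -n kube-system describe pod metrics-server-569cc877fc-hvxtg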
	
	
	==> storage-provisioner [084cde7459c3484dc827fb95bcba3e12f9f645203aaf24df4adca837533190a1] <==
	I0717 18:46:17.767514       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:46:17.775947       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:46:17.776081       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:46:17.786748       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:46:17.786938       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-527415_4de59122-21e5-46d9-ba94-4da6fa5d9bed!
	I0717 18:46:17.796848       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a8197d73-dd9b-4f7a-a828-578a98fc0b06", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-527415_4de59122-21e5-46d9-ba94-4da6fa5d9bed became leader
	I0717 18:46:17.887494       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-527415_4de59122-21e5-46d9-ba94-4da6fa5d9bed!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-527415 -n embed-certs-527415
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-527415 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-hvxtg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-527415 describe pod metrics-server-569cc877fc-hvxtg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-527415 describe pod metrics-server-569cc877fc-hvxtg: exit status 1 (67.302131ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-hvxtg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-527415 describe pod metrics-server-569cc877fc-hvxtg: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:48:44.838867   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:49:22.808923   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:49:26.523220   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
[previous warning line repeated 37 more times]
E0717 18:50:12.088971   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
[previous warning line repeated 7 more times]
E0717 18:50:19.280013   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
[previous warning line repeated 21 more times]
E0717 18:50:41.791308   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
[previous warning line repeated 7 more times]
E0717 18:50:49.568891   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
[previous warning line repeated 8 more times]
E0717 18:50:58.171655   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
[previous warning line repeated 8 more times]
E0717 18:51:07.656352   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
[previous warning line repeated 26 more times]
E0717 18:51:35.134527   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
[previous warning line repeated 7 more times]
E0717 18:51:42.323596   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
[previous warning line repeated 26 more times]
E0717 18:52:09.805611   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
[previous warning line repeated 11 more times]
E0717 18:52:21.216364   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
(warning repeated 38 times)
E0717 18:52:59.763005   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
(warning repeated 22 times)
E0717 18:53:21.395850   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
(warning repeated 65 times)
E0717 18:54:26.523189   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
(warning repeated 40 times)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:55:12.089286   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:55:19.279021   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:55:41.790953   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:55:58.171896   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:56:07.656907   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:56:24.444533   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:57:09.805046   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019549 -n old-k8s-version-019549
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 2 (220.202189ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-019549" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 2 (214.842019ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-019549 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-019549 logs -n 25: (1.510249413s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-527415            | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-371172                                        | pause-371172                 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-341716 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | disable-driver-mounts-341716                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:34 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-066175             | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC | 17 Jul 24 18:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-066175                                   | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-022930  | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC | 17 Jul 24 18:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-527415                 | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-019549        | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-066175                  | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-066175 --memory=2200                     | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:45 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-019549             | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-022930       | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC | 17 Jul 24 18:45 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:37:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:37:14.473404   81068 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:37:14.473526   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473535   81068 out.go:304] Setting ErrFile to fd 2...
	I0717 18:37:14.473540   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473714   81068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:37:14.474251   81068 out.go:298] Setting JSON to false
	I0717 18:37:14.475115   81068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8377,"bootTime":1721233057,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:37:14.475172   81068 start.go:139] virtualization: kvm guest
	I0717 18:37:14.477356   81068 out.go:177] * [default-k8s-diff-port-022930] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:37:14.478600   81068 notify.go:220] Checking for updates...
	I0717 18:37:14.478615   81068 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:37:14.480094   81068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:37:14.481516   81068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:37:14.482886   81068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:37:14.484159   81068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:37:14.485449   81068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:37:14.487164   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:37:14.487744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.487795   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.502368   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0717 18:37:14.502712   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.503192   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.503213   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.503574   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.503778   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.504032   81068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:37:14.504326   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.504381   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.518330   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0717 18:37:14.518718   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.519095   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.519114   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.519409   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.519578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.549923   81068 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:37:14.551160   81068 start.go:297] selected driver: kvm2
	I0717 18:37:14.551175   81068 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.551302   81068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:37:14.551931   81068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.552008   81068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:37:14.566038   81068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:37:14.566371   81068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:37:14.566443   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:37:14.566466   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:37:14.566516   81068 start.go:340] cluster config:
	{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.566643   81068 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.568602   81068 out.go:177] * Starting "default-k8s-diff-port-022930" primary control-plane node in "default-k8s-diff-port-022930" cluster
	I0717 18:37:13.057187   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:16.129274   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:14.569868   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:37:14.569908   81068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:37:14.569919   81068 cache.go:56] Caching tarball of preloaded images
	I0717 18:37:14.569992   81068 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:37:14.570003   81068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:37:14.570100   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:37:14.570277   81068 start.go:360] acquireMachinesLock for default-k8s-diff-port-022930: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:37:22.209207   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:25.281226   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:31.361221   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:34.433258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:40.513234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:43.585225   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:49.665198   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:52.737256   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:58.817201   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:01.889213   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:07.969247   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:11.041264   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:17.121227   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:20.193250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:26.273206   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:29.345193   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:35.425259   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:38.497261   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:44.577185   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:47.649306   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:53.729234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:56.801257   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:02.881239   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:05.953258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:12.033251   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:15.105230   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:21.185200   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:24.257195   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:30.337181   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:33.409224   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:39.489219   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:42.561250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:45.565739   80401 start.go:364] duration metric: took 4m11.345351864s to acquireMachinesLock for "no-preload-066175"
	I0717 18:39:45.565801   80401 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:39:45.565807   80401 fix.go:54] fixHost starting: 
	I0717 18:39:45.566167   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:39:45.566198   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:39:45.580996   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45665
	I0717 18:39:45.581389   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:39:45.581797   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:39:45.581817   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:39:45.582145   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:39:45.582323   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:39:45.582467   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:39:45.584074   80401 fix.go:112] recreateIfNeeded on no-preload-066175: state=Stopped err=<nil>
	I0717 18:39:45.584109   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	W0717 18:39:45.584260   80401 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:39:45.586842   80401 out.go:177] * Restarting existing kvm2 VM for "no-preload-066175" ...
	I0717 18:39:45.563046   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:39:45.563105   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563521   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:39:45.563555   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563758   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:39:45.565594   80180 machine.go:97] duration metric: took 4m37.427146226s to provisionDockerMachine
	I0717 18:39:45.565643   80180 fix.go:56] duration metric: took 4m37.448013968s for fixHost
	I0717 18:39:45.565651   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 4m37.448033785s
	W0717 18:39:45.565675   80180 start.go:714] error starting host: provision: host is not running
	W0717 18:39:45.565775   80180 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 18:39:45.565784   80180 start.go:729] Will try again in 5 seconds ...
	I0717 18:39:45.587901   80401 main.go:141] libmachine: (no-preload-066175) Calling .Start
	I0717 18:39:45.588046   80401 main.go:141] libmachine: (no-preload-066175) Ensuring networks are active...
	I0717 18:39:45.588666   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network default is active
	I0717 18:39:45.589012   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network mk-no-preload-066175 is active
	I0717 18:39:45.589386   80401 main.go:141] libmachine: (no-preload-066175) Getting domain xml...
	I0717 18:39:45.589959   80401 main.go:141] libmachine: (no-preload-066175) Creating domain...
	I0717 18:39:46.785717   80401 main.go:141] libmachine: (no-preload-066175) Waiting to get IP...
	I0717 18:39:46.786495   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:46.786912   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:46.786974   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:46.786888   81612 retry.go:31] will retry after 301.458026ms: waiting for machine to come up
	I0717 18:39:47.090556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.091129   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.091154   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.091098   81612 retry.go:31] will retry after 347.107185ms: waiting for machine to come up
	I0717 18:39:47.439530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.440010   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.440033   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.439947   81612 retry.go:31] will retry after 436.981893ms: waiting for machine to come up
	I0717 18:39:47.878684   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.879091   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.879120   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.879051   81612 retry.go:31] will retry after 582.942833ms: waiting for machine to come up
	I0717 18:39:48.464068   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:48.464568   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:48.464593   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:48.464513   81612 retry.go:31] will retry after 633.101908ms: waiting for machine to come up
	I0717 18:39:49.099383   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.099762   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.099784   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.099720   81612 retry.go:31] will retry after 847.181679ms: waiting for machine to come up
	I0717 18:39:50.567294   80180 start.go:360] acquireMachinesLock for embed-certs-527415: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:39:49.948696   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.949228   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.949260   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.949188   81612 retry.go:31] will retry after 1.048891217s: waiting for machine to come up
	I0717 18:39:50.999658   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.000062   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.000099   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.000001   81612 retry.go:31] will retry after 942.285454ms: waiting for machine to come up
	I0717 18:39:51.944171   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.944676   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.944702   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.944632   81612 retry.go:31] will retry after 1.21768861s: waiting for machine to come up
	I0717 18:39:53.163883   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:53.164345   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:53.164368   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:53.164305   81612 retry.go:31] will retry after 1.505905193s: waiting for machine to come up
	I0717 18:39:54.671532   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:54.671951   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:54.671977   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:54.671918   81612 retry.go:31] will retry after 2.885547597s: waiting for machine to come up
	I0717 18:39:57.560375   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:57.560878   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:57.560902   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:57.560830   81612 retry.go:31] will retry after 3.53251124s: waiting for machine to come up
	I0717 18:40:02.249487   80857 start.go:364] duration metric: took 3m17.095542929s to acquireMachinesLock for "old-k8s-version-019549"
	I0717 18:40:02.249548   80857 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:02.249556   80857 fix.go:54] fixHost starting: 
	I0717 18:40:02.249946   80857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:02.249976   80857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:02.269365   80857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0717 18:40:02.269715   80857 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:02.270182   80857 main.go:141] libmachine: Using API Version  1
	I0717 18:40:02.270205   80857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:02.270534   80857 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:02.270738   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:02.270875   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetState
	I0717 18:40:02.272408   80857 fix.go:112] recreateIfNeeded on old-k8s-version-019549: state=Stopped err=<nil>
	I0717 18:40:02.272443   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	W0717 18:40:02.272597   80857 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:02.274702   80857 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-019549" ...
	I0717 18:40:01.094975   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has current primary IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095579   80401 main.go:141] libmachine: (no-preload-066175) Found IP for machine: 192.168.72.216
	I0717 18:40:01.095592   80401 main.go:141] libmachine: (no-preload-066175) Reserving static IP address...
	I0717 18:40:01.095955   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.095980   80401 main.go:141] libmachine: (no-preload-066175) DBG | skip adding static IP to network mk-no-preload-066175 - found existing host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"}
	I0717 18:40:01.095989   80401 main.go:141] libmachine: (no-preload-066175) Reserved static IP address: 192.168.72.216
	I0717 18:40:01.096000   80401 main.go:141] libmachine: (no-preload-066175) Waiting for SSH to be available...
	I0717 18:40:01.096010   80401 main.go:141] libmachine: (no-preload-066175) DBG | Getting to WaitForSSH function...
	I0717 18:40:01.098163   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098498   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.098521   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098631   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH client type: external
	I0717 18:40:01.098657   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa (-rw-------)
	I0717 18:40:01.098692   80401 main.go:141] libmachine: (no-preload-066175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:01.098707   80401 main.go:141] libmachine: (no-preload-066175) DBG | About to run SSH command:
	I0717 18:40:01.098720   80401 main.go:141] libmachine: (no-preload-066175) DBG | exit 0
	I0717 18:40:01.216740   80401 main.go:141] libmachine: (no-preload-066175) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:01.217099   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetConfigRaw
	I0717 18:40:01.217706   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.220160   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220461   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.220492   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220656   80401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/config.json ...
	I0717 18:40:01.220843   80401 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:01.220860   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:01.221067   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.223044   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223347   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.223371   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223531   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.223719   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223864   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223980   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.224125   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.224332   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.224345   80401 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:01.321053   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:01.321083   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321333   80401 buildroot.go:166] provisioning hostname "no-preload-066175"
	I0717 18:40:01.321359   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321529   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.323945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324269   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.324297   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324421   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.324582   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324724   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324837   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.324996   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.325162   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.325175   80401 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-066175 && echo "no-preload-066175" | sudo tee /etc/hostname
	I0717 18:40:01.435003   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-066175
	
	I0717 18:40:01.435033   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.437795   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438113   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.438155   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438344   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.438533   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438692   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.438948   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.439094   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.439108   80401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-066175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-066175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-066175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:01.540598   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:01.540631   80401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:01.540650   80401 buildroot.go:174] setting up certificates
	I0717 18:40:01.540660   80401 provision.go:84] configureAuth start
	I0717 18:40:01.540669   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.540977   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.543503   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543788   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.543817   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543907   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.545954   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546261   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.546280   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546415   80401 provision.go:143] copyHostCerts
	I0717 18:40:01.546483   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:01.546498   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:01.546596   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:01.546730   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:01.546743   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:01.546788   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:01.546878   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:01.546888   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:01.546921   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:01.547054   80401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.no-preload-066175 san=[127.0.0.1 192.168.72.216 localhost minikube no-preload-066175]
	I0717 18:40:01.628522   80401 provision.go:177] copyRemoteCerts
	I0717 18:40:01.628574   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:01.628596   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.631306   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631714   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.631761   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631876   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.632050   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.632210   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.632330   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:01.711344   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:01.738565   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 18:40:01.765888   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:40:01.790852   80401 provision.go:87] duration metric: took 250.181586ms to configureAuth
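	A quick sketch of verifying that the certificates copied by copyRemoteCerts above actually landed on the guest; the paths are the remote targets from the scp lines:
	    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem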
	I0717 18:40:01.790874   80401 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:01.791046   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:40:01.791111   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.793530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.793922   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.793945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.794095   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.794323   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794497   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794635   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.794786   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.794955   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.794969   80401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:02.032506   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:02.032543   80401 machine.go:97] duration metric: took 811.687511ms to provisionDockerMachine
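	A short sketch of confirming the container-runtime option written just above; the file path and expected contents come from the SSH command and its echoed output:
	    cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    sudo systemctl is-active crio      # crio should be active again after the restart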
	I0717 18:40:02.032554   80401 start.go:293] postStartSetup for "no-preload-066175" (driver="kvm2")
	I0717 18:40:02.032567   80401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:02.032596   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.032921   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:02.032966   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.035429   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035731   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.035767   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035921   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.036081   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.036351   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.036493   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.114601   80401 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:02.118230   80401 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:02.118247   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:02.118308   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:02.118384   80401 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:02.118592   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:02.126753   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:02.148028   80401 start.go:296] duration metric: took 115.461293ms for postStartSetup
	I0717 18:40:02.148066   80401 fix.go:56] duration metric: took 16.582258787s for fixHost
	I0717 18:40:02.148084   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.150550   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.150917   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.150949   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.151061   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.151242   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151394   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151513   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.151658   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:02.151828   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:02.151841   80401 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:02.249303   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241602.223072082
	
	I0717 18:40:02.249334   80401 fix.go:216] guest clock: 1721241602.223072082
	I0717 18:40:02.249344   80401 fix.go:229] Guest: 2024-07-17 18:40:02.223072082 +0000 UTC Remote: 2024-07-17 18:40:02.14806999 +0000 UTC m=+268.060359078 (delta=75.002092ms)
	I0717 18:40:02.249388   80401 fix.go:200] guest clock delta is within tolerance: 75.002092ms
	I0717 18:40:02.249396   80401 start.go:83] releasing machines lock for "no-preload-066175", held for 16.683615057s
	I0717 18:40:02.249442   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.249735   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:02.252545   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.252896   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.252929   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.253053   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253516   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253770   80401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:02.253803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.253913   80401 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:02.253937   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.256152   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256462   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.256501   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256558   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.256616   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256718   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.256879   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257013   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.257021   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.257038   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.257158   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.257312   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.257469   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257604   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.376103   80401 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:02.381639   80401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:02.529357   80401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:02.536396   80401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:02.536463   80401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:02.555045   80401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:02.555067   80401 start.go:495] detecting cgroup driver to use...
	I0717 18:40:02.555130   80401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:02.570540   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:02.583804   80401 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:02.583867   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:02.596657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:02.610371   80401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:02.717489   80401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:02.875146   80401 docker.go:233] disabling docker service ...
	I0717 18:40:02.875235   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:02.895657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:02.908366   80401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:03.018375   80401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:03.143922   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:03.160599   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:03.180643   80401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 18:40:03.180709   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.190040   80401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:03.190097   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.199275   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.208647   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.217750   80401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:03.226808   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.235779   80401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.251451   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
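	For readability, a consolidated sketch of the CRI-O drop-in edits performed by the sed commands above (same expressions and target file; run as root on the guest, CONF is just a local shorthand):
	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    sudo systemctl restart crio        # applied later in the log, after the netfilter checks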
	I0717 18:40:03.261476   80401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:03.269978   80401 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:03.270028   80401 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:03.280901   80401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
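	The netfilter preparation above can be reproduced by hand; a minimal sketch using the same commands the log shows (the first sysctl fails until br_netfilter is loaded):
	    sudo modprobe br_netfilter
	    sudo sysctl net.bridge.bridge-nf-call-iptables          # now resolves instead of 'cannot stat'
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"     # enable IPv4 forwarding for pod traffic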
	I0717 18:40:03.290184   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:03.409167   80401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:03.541153   80401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:03.541218   80401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:03.546012   80401 start.go:563] Will wait 60s for crictl version
	I0717 18:40:03.546059   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:03.549567   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:03.588396   80401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
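	A small sketch of checking the CRI socket and runtime version manually, mirroring the probes above:
	    stat /var/run/crio/crio.sock    # the socket minikube waits up to 60s for
	    sudo /usr/bin/crictl version    # RuntimeName: cri-o, RuntimeVersion: 1.29.1
	    crio --version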
	I0717 18:40:03.588467   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.622472   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.652180   80401 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 18:40:03.653613   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:03.656560   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.656959   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:03.656987   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.657222   80401 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:03.661102   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:03.673078   80401 kubeadm.go:883] updating cluster {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:03.673212   80401 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:40:03.673248   80401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:03.703959   80401 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 18:40:03.703986   80401 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:03.704042   80401 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.704078   80401 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.704095   80401 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.704114   80401 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.704150   80401 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.704077   80401 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.704168   80401 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 18:40:03.704243   80401 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.705795   80401 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705801   80401 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.705792   80401 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.705816   80401 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.705829   80401 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 18:40:03.706094   80401 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.925413   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.930827   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 18:40:03.963901   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.964215   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.966162   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.970852   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.973664   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.997849   80401 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 18:40:03.997912   80401 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.997969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118851   80401 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 18:40:04.118888   80401 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.118892   80401 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 18:40:04.118924   80401 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.118934   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118943   80401 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 18:40:04.118969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118969   80401 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.119001   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119027   80401 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 18:40:04.119058   80401 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.119089   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:04.119104   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119065   80401 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 18:40:04.119136   80401 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.119159   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:02.275985   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .Start
	I0717 18:40:02.276143   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring networks are active...
	I0717 18:40:02.276898   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network default is active
	I0717 18:40:02.277333   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network mk-old-k8s-version-019549 is active
	I0717 18:40:02.277796   80857 main.go:141] libmachine: (old-k8s-version-019549) Getting domain xml...
	I0717 18:40:02.278481   80857 main.go:141] libmachine: (old-k8s-version-019549) Creating domain...
	I0717 18:40:03.571325   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting to get IP...
	I0717 18:40:03.572359   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.572836   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.572968   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.572816   81751 retry.go:31] will retry after 301.991284ms: waiting for machine to come up
	I0717 18:40:03.876263   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.876688   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.876715   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.876637   81751 retry.go:31] will retry after 286.461163ms: waiting for machine to come up
	I0717 18:40:04.165366   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.165873   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.165902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.165811   81751 retry.go:31] will retry after 383.479108ms: waiting for machine to come up
	I0717 18:40:04.551152   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.551615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.551650   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.551589   81751 retry.go:31] will retry after 429.076714ms: waiting for machine to come up
	I0717 18:40:04.982157   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.982517   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.982545   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.982470   81751 retry.go:31] will retry after 553.684035ms: waiting for machine to come up
	I0717 18:40:04.122952   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.130590   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.130741   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.200609   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.200631   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.200643   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 18:40:04.200728   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:04.200741   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.200815   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.212034   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 18:40:04.212057   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.212113   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:04.212123   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.259447   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259525   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259548   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259552   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259553   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 18:40:04.259534   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.259588   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259591   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 18:40:04.259628   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259639   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.550060   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236639   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.976976668s)
	I0717 18:40:06.236683   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236691   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.97711629s)
	I0717 18:40:06.236718   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236732   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.977125153s)
	I0717 18:40:06.236752   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 18:40:06.236776   80401 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236854   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236781   80401 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.68669473s)
	I0717 18:40:06.236908   80401 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 18:40:06.236951   80401 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236994   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:08.107122   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870244887s)
	I0717 18:40:08.107152   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 18:40:08.107175   80401 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107203   80401 ssh_runner.go:235] Completed: which crictl: (1.870188554s)
	I0717 18:40:08.107224   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107261   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:08.146817   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 18:40:08.146932   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:05.538229   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:05.538753   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:05.538777   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:05.538702   81751 retry.go:31] will retry after 747.130907ms: waiting for machine to come up
	I0717 18:40:06.287146   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:06.287626   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:06.287665   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:06.287581   81751 retry.go:31] will retry after 1.171580264s: waiting for machine to come up
	I0717 18:40:07.461393   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:07.462015   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:07.462046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:07.461963   81751 retry.go:31] will retry after 1.199265198s: waiting for machine to come up
	I0717 18:40:08.663340   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:08.663789   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:08.663815   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:08.663745   81751 retry.go:31] will retry after 1.621895351s: waiting for machine to come up
	I0717 18:40:11.404193   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.296944718s)
	I0717 18:40:11.404228   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 18:40:11.404248   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:11.404245   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.257289666s)
	I0717 18:40:11.404272   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 18:40:11.404294   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:13.370389   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966067238s)
	I0717 18:40:13.370426   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 18:40:13.370455   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:13.370505   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:10.287596   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:10.288019   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:10.288046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:10.287964   81751 retry.go:31] will retry after 1.748504204s: waiting for machine to come up
	I0717 18:40:12.038137   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:12.038582   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:12.038615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:12.038532   81751 retry.go:31] will retry after 2.477996004s: waiting for machine to come up
	I0717 18:40:14.517788   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:14.518175   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:14.518203   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:14.518123   81751 retry.go:31] will retry after 3.29313184s: waiting for machine to come up
	I0717 18:40:19.093608   81068 start.go:364] duration metric: took 3m4.523289209s to acquireMachinesLock for "default-k8s-diff-port-022930"
	I0717 18:40:19.093694   81068 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:19.093705   81068 fix.go:54] fixHost starting: 
	I0717 18:40:19.094122   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:19.094157   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:19.113793   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0717 18:40:19.114236   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:19.114755   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:40:19.114775   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:19.115110   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:19.115294   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:19.115434   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:40:19.117072   81068 fix.go:112] recreateIfNeeded on default-k8s-diff-port-022930: state=Stopped err=<nil>
	I0717 18:40:19.117109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	W0717 18:40:19.117256   81068 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:19.120986   81068 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-022930" ...
	I0717 18:40:15.214734   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.844202729s)
	I0717 18:40:15.214756   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 18:40:15.214777   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:15.214814   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:17.066570   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.851726063s)
	I0717 18:40:17.066604   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 18:40:17.066629   80401 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.066679   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.703556   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 18:40:17.703614   80401 cache_images.go:123] Successfully loaded all cached images
	I0717 18:40:17.703624   80401 cache_images.go:92] duration metric: took 13.999623105s to LoadCachedImages
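	Each "Transferred and loaded ... from cache" line above corresponds to pushing a cached tarball into CRI-O's image store with podman; a sketch for one of the images, using the paths and names shown in the log:
	    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.31.0-beta.0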
	I0717 18:40:17.703638   80401 kubeadm.go:934] updating node { 192.168.72.216 8443 v1.31.0-beta.0 crio true true} ...
	I0717 18:40:17.703754   80401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-066175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
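	A sketch of how the kubelet drop-in above gets applied on the guest; the directory, daemon-reload and start steps are the ones that appear further down in the log:
	    sudo mkdir -p /etc/systemd/system/kubelet.service.d
	    # the [Service] override shown above is written to 10-kubeadm.conf in that directory
	    sudo systemctl daemon-reload
	    sudo systemctl start kubelet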
	I0717 18:40:17.703830   80401 ssh_runner.go:195] Run: crio config
	I0717 18:40:17.753110   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:17.753138   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:17.753159   80401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:17.753190   80401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.216 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-066175 NodeName:no-preload-066175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:17.753404   80401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-066175"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:17.753492   80401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 18:40:17.763417   80401 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:17.763491   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:17.772139   80401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 18:40:17.786982   80401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 18:40:17.801327   80401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 18:40:17.816796   80401 ssh_runner.go:195] Run: grep 192.168.72.216	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:17.820354   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
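	[editor's note] The one-liner above keeps /etc/hosts idempotent: any stale "control-plane.minikube.internal" line is filtered out before the current entry is appended. As a rough Go sketch of the same idea (not minikube's actual code; it writes to a scratch file since overwriting /etc/hosts needs root, and the IP/hostname are taken from the log above):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.72.216\tcontrol-plane.minikube.internal" // values from the log above
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// drop any previous control-plane.minikube.internal mapping
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		fmt.Println("wrote /tmp/hosts.new")
	}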
	I0717 18:40:17.834155   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:17.970222   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:17.989953   80401 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175 for IP: 192.168.72.216
	I0717 18:40:17.989977   80401 certs.go:194] generating shared ca certs ...
	I0717 18:40:17.989998   80401 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:17.990160   80401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:17.990217   80401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:17.990231   80401 certs.go:256] generating profile certs ...
	I0717 18:40:17.990365   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key
	I0717 18:40:17.990460   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672
	I0717 18:40:17.990509   80401 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key
	I0717 18:40:17.990679   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:17.990723   80401 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:17.990740   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:17.990772   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:17.990813   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:17.990846   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:17.990905   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:17.991590   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:18.035349   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:18.079539   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:18.110382   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:18.135920   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:40:18.168675   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:18.196132   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:18.230418   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:18.254319   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:18.277293   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:18.301416   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:18.330021   80401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:18.348803   80401 ssh_runner.go:195] Run: openssl version
	I0717 18:40:18.355126   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:18.366004   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370221   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370287   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.375799   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:18.385991   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:18.396141   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400451   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400526   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.406203   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:18.419059   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:18.429450   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433742   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433794   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.439261   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:18.450327   80401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:18.454734   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:18.460256   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:18.465766   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:18.471349   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:18.476780   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:18.482509   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
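	[editor's note] The six checks above rely on `openssl x509 -checkend 86400`, which exits 0 only if the certificate stays valid for at least the next 86400 seconds (24 hours). A minimal Go wrapper for the same check (illustrative sketch only; paths copied from the log above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certValid24h reports whether the certificate at path does not expire within 24 hours.
	func certValid24h(path string) bool {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
		return cmd.Run() == nil // non-zero exit (or error) means expiring soon or unreadable
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			fmt.Printf("%s: valid for 24h = %v\n", p, certValid24h(p))
		}
	}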
	I0717 18:40:18.488138   80401 kubeadm.go:392] StartCluster: {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:18.488229   80401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:18.488270   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.532219   80401 cri.go:89] found id: ""
	I0717 18:40:18.532318   80401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:18.542632   80401 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:18.542655   80401 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:18.542699   80401 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:18.552352   80401 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:18.553351   80401 kubeconfig.go:125] found "no-preload-066175" server: "https://192.168.72.216:8443"
	I0717 18:40:18.555295   80401 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:18.565857   80401 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.216
	I0717 18:40:18.565892   80401 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:18.565905   80401 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:18.565958   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.605512   80401 cri.go:89] found id: ""
	I0717 18:40:18.605593   80401 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:18.622235   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:18.633175   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:18.633196   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:18.633241   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:18.641969   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:18.642023   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:18.651017   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:18.659619   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:18.659667   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:18.668008   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.675985   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:18.676037   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.685937   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:18.695574   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:18.695624   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:18.706040   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:18.717397   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:18.836009   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:19.122366   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Start
	I0717 18:40:19.122530   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring networks are active...
	I0717 18:40:19.123330   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network default is active
	I0717 18:40:19.123832   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network mk-default-k8s-diff-port-022930 is active
	I0717 18:40:19.124268   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Getting domain xml...
	I0717 18:40:19.124922   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Creating domain...
	I0717 18:40:17.813673   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814213   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has current primary IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814242   80857 main.go:141] libmachine: (old-k8s-version-019549) Found IP for machine: 192.168.39.128
	I0717 18:40:17.814277   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserving static IP address...
	I0717 18:40:17.814720   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserved static IP address: 192.168.39.128
	I0717 18:40:17.814738   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting for SSH to be available...
	I0717 18:40:17.814762   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.814783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | skip adding static IP to network mk-old-k8s-version-019549 - found existing host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"}
	I0717 18:40:17.814796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Getting to WaitForSSH function...
	I0717 18:40:17.817314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817714   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.817743   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH client type: external
	I0717 18:40:17.817944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa (-rw-------)
	I0717 18:40:17.817971   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:17.817984   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | About to run SSH command:
	I0717 18:40:17.818000   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | exit 0
	I0717 18:40:17.945902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:17.946262   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetConfigRaw
	I0717 18:40:17.946907   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:17.949757   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950158   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.950178   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950474   80857 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/config.json ...
	I0717 18:40:17.950706   80857 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:17.950728   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:17.950941   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:17.953738   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954141   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.954184   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954282   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:17.954456   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954617   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954790   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:17.954957   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:17.955121   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:17.955131   80857 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:18.061082   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:18.061113   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061405   80857 buildroot.go:166] provisioning hostname "old-k8s-version-019549"
	I0717 18:40:18.061432   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061685   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.064855   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.065348   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065537   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.065777   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.065929   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.066118   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.066329   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.066547   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.066564   80857 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-019549 && echo "old-k8s-version-019549" | sudo tee /etc/hostname
	I0717 18:40:18.191467   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-019549
	
	I0717 18:40:18.191517   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.194917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195455   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.195502   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195714   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.195908   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196105   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196288   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.196483   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.196708   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.196731   80857 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-019549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-019549/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-019549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:18.315020   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:18.315047   80857 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:18.315065   80857 buildroot.go:174] setting up certificates
	I0717 18:40:18.315078   80857 provision.go:84] configureAuth start
	I0717 18:40:18.315090   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.315358   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:18.318342   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.318796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.318826   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.319078   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.321562   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.321914   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.321944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.322125   80857 provision.go:143] copyHostCerts
	I0717 18:40:18.322208   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:18.322226   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:18.322309   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:18.322443   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:18.322457   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:18.322492   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:18.322579   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:18.322591   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:18.322621   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:18.322727   80857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-019549 san=[127.0.0.1 192.168.39.128 localhost minikube old-k8s-version-019549]
	I0717 18:40:18.397216   80857 provision.go:177] copyRemoteCerts
	I0717 18:40:18.397266   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:18.397301   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.399887   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400237   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.400286   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400531   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.400732   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.400880   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.401017   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.490677   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:18.518392   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 18:40:18.543930   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:18.567339   80857 provision.go:87] duration metric: took 252.250106ms to configureAuth
	I0717 18:40:18.567360   80857 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:18.567539   80857 config.go:182] Loaded profile config "old-k8s-version-019549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:40:18.567610   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.570373   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.570809   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570943   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.571140   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571281   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.571624   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.571841   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.571862   80857 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:18.845725   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:18.845752   80857 machine.go:97] duration metric: took 895.03234ms to provisionDockerMachine
	I0717 18:40:18.845765   80857 start.go:293] postStartSetup for "old-k8s-version-019549" (driver="kvm2")
	I0717 18:40:18.845778   80857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:18.845828   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:18.846158   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:18.846192   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.848760   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849264   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.849293   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.849649   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.849843   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.850007   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.938026   80857 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:18.943223   80857 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:18.943254   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:18.943317   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:18.943417   80857 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:18.943509   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:18.954887   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:18.976980   80857 start.go:296] duration metric: took 131.200877ms for postStartSetup
	I0717 18:40:18.977022   80857 fix.go:56] duration metric: took 16.727466541s for fixHost
	I0717 18:40:18.977041   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.980020   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980384   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.980417   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980533   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.980723   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.980903   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.981059   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.981207   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.981406   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.981418   80857 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:40:19.093409   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241619.063415252
	
	I0717 18:40:19.093433   80857 fix.go:216] guest clock: 1721241619.063415252
	I0717 18:40:19.093443   80857 fix.go:229] Guest: 2024-07-17 18:40:19.063415252 +0000 UTC Remote: 2024-07-17 18:40:18.97702579 +0000 UTC m=+213.960604949 (delta=86.389462ms)
	I0717 18:40:19.093494   80857 fix.go:200] guest clock delta is within tolerance: 86.389462ms
	I0717 18:40:19.093506   80857 start.go:83] releasing machines lock for "old-k8s-version-019549", held for 16.843984035s
	I0717 18:40:19.093543   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.093842   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:19.096443   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.096817   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.096848   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.097035   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097579   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097769   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097859   80857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:19.097915   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.098007   80857 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:19.098031   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.100775   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101108   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101160   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101185   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101412   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101595   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.101606   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101637   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101718   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.101789   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101853   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.101975   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.102092   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.102212   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.218596   80857 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:19.225675   80857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:19.371453   80857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:19.381365   80857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:19.381438   80857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:19.397504   80857 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:19.397530   80857 start.go:495] detecting cgroup driver to use...
	I0717 18:40:19.397597   80857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:19.412150   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:19.425495   80857 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:19.425578   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:19.438662   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:19.451953   80857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:19.578702   80857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:19.733328   80857 docker.go:233] disabling docker service ...
	I0717 18:40:19.733411   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:19.753615   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:19.774057   80857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:19.933901   80857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:20.049914   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:20.063500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:20.082560   80857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 18:40:20.082611   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.092857   80857 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:20.092912   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.103283   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.112612   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.122671   80857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:20.132892   80857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:20.145445   80857 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:20.145501   80857 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:20.158958   80857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:20.168377   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:20.307224   80857 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:20.453407   80857 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:20.453490   80857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:20.458007   80857 start.go:563] Will wait 60s for crictl version
	I0717 18:40:20.458062   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:20.461420   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:20.507358   80857 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:20.507426   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.542812   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.577280   80857 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 18:40:20.432028   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.59597321s)
	I0717 18:40:20.432063   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.633854   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.728474   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.879989   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:20.880079   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.380421   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.880208   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.912390   80401 api_server.go:72] duration metric: took 1.032400417s to wait for apiserver process to appear ...
	I0717 18:40:21.912419   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:21.912443   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:21.912904   80401 api_server.go:269] stopped: https://192.168.72.216:8443/healthz: Get "https://192.168.72.216:8443/healthz": dial tcp 192.168.72.216:8443: connect: connection refused
	I0717 18:40:22.412598   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
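	[editor's note] The loop above simply polls the apiserver's /healthz endpoint until it answers (the first attempt fails with "connection refused" because the static pod is still starting). A simplified Go poller in the same spirit (not minikube's implementation; it skips TLS verification for brevity, whereas minikube trusts the cluster CA; endpoint and retry interval taken from the log above):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.216:8443/healthz" // endpoint from the log above
		for i := 0; i < 60; i++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // the log retries roughly every 500ms
		}
		fmt.Println("timed out waiting for apiserver /healthz")
	}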
	I0717 18:40:20.397025   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting to get IP...
	I0717 18:40:20.398122   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398525   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398610   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.398506   81910 retry.go:31] will retry after 285.646022ms: waiting for machine to come up
	I0717 18:40:20.686556   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687151   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687263   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.687202   81910 retry.go:31] will retry after 239.996ms: waiting for machine to come up
	I0717 18:40:20.928604   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929111   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929139   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.929057   81910 retry.go:31] will retry after 487.674422ms: waiting for machine to come up
	I0717 18:40:21.418475   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418928   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.418872   81910 retry.go:31] will retry after 439.363216ms: waiting for machine to come up
	I0717 18:40:21.859546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860273   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.860145   81910 retry.go:31] will retry after 598.922134ms: waiting for machine to come up
	I0717 18:40:22.461026   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461509   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461542   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:22.461457   81910 retry.go:31] will retry after 908.602286ms: waiting for machine to come up
	I0717 18:40:23.371582   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372170   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:23.372093   81910 retry.go:31] will retry after 893.690966ms: waiting for machine to come up
	I0717 18:40:24.267377   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267908   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267935   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:24.267873   81910 retry.go:31] will retry after 1.468061022s: waiting for machine to come up
	I0717 18:40:20.578679   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:20.581569   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.581933   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:20.581961   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.582197   80857 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:20.586047   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:20.598137   80857 kubeadm.go:883] updating cluster {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:20.598284   80857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:40:20.598355   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:20.646681   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:20.646757   80857 ssh_runner.go:195] Run: which lz4
	I0717 18:40:20.650691   80857 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:20.654703   80857 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:20.654730   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 18:40:22.163706   80857 crio.go:462] duration metric: took 1.513040695s to copy over tarball
	I0717 18:40:22.163783   80857 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:40:24.904256   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.904292   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.904308   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:24.971088   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.971120   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.971136   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.015832   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.015868   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.413309   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.418927   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.418955   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.913026   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.917375   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.917407   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.412566   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.419115   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.419140   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.912680   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.920245   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.920268   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.412854   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.417356   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.417390   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.912883   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.918242   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.918274   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:28.412591   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:28.419257   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:40:28.427814   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:40:28.427842   80401 api_server.go:131] duration metric: took 6.515416451s to wait for apiserver health ...
	I0717 18:40:28.427854   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:28.427863   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:28.429828   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:40:28.431012   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:28.444822   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:40:28.465212   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:28.477639   80401 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:28.477691   80401 system_pods.go:61] "coredns-5cfdc65f69-spj2w" [6849b651-9346-4d96-97a7-88eca7bbd50a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:28.477706   80401 system_pods.go:61] "etcd-no-preload-066175" [be012488-220b-421d-bf16-a3623fafb8fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:28.477721   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [4292a786-61f3-405d-8784-ec8a58e1b124] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:28.477731   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [937a48f4-7fca-4cee-bb50-51f1720960da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:28.477739   80401 system_pods.go:61] "kube-proxy-tn5xn" [f0a910b3-98b6-470f-a5a2-e49369ecb733] Running
	I0717 18:40:28.477748   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [ffa2475c-7a5a-4988-89a2-4727e07356cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:28.477756   80401 system_pods.go:61] "metrics-server-78fcd8795b-mbtvd" [ccd7a565-52ef-49be-b659-31ae20af537a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:28.477761   80401 system_pods.go:61] "storage-provisioner" [19914ecc-2fcc-4cb8-bd78-fb6891dcf85d] Running
	I0717 18:40:28.477769   80401 system_pods.go:74] duration metric: took 12.536267ms to wait for pod list to return data ...
	I0717 18:40:28.477777   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:28.482322   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:28.482348   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:28.482368   80401 node_conditions.go:105] duration metric: took 4.585233ms to run NodePressure ...
	I0717 18:40:28.482387   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.768656   80401 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773308   80401 kubeadm.go:739] kubelet initialised
	I0717 18:40:28.773330   80401 kubeadm.go:740] duration metric: took 4.654448ms waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773338   80401 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:28.778778   80401 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:25.738071   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738580   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738611   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:25.738538   81910 retry.go:31] will retry after 1.505740804s: waiting for machine to come up
	I0717 18:40:27.246293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246651   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246674   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:27.246606   81910 retry.go:31] will retry after 1.574253799s: waiting for machine to come up
	I0717 18:40:28.822159   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822597   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:28.822517   81910 retry.go:31] will retry after 2.132842884s: waiting for machine to come up
	I0717 18:40:25.307875   80857 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.144060111s)
	I0717 18:40:25.307903   80857 crio.go:469] duration metric: took 3.144169984s to extract the tarball
	I0717 18:40:25.307914   80857 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:25.354436   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:25.404799   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:25.404827   80857 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:25.404884   80857 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.404936   80857 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 18:40:25.404908   80857 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.404952   80857 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.404998   80857 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.405010   80857 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.406661   80857 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.406667   80857 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 18:40:25.406690   80857 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.407119   80857 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.619950   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 18:40:25.635075   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.641561   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.647362   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.648054   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.649684   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.664183   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.709163   80857 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 18:40:25.709227   80857 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 18:40:25.709275   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.760931   80857 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 18:40:25.760994   80857 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.761042   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.779324   80857 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 18:40:25.779378   80857 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.779429   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799052   80857 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 18:40:25.799097   80857 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.799106   80857 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 18:40:25.799131   80857 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 18:40:25.799190   80857 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.799233   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799136   80857 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.799148   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799298   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.806973   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 18:40:25.807041   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.807066   80857 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 18:40:25.807095   80857 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.807126   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.807137   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.807237   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.811025   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.811114   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.935792   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 18:40:25.935853   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 18:40:25.935863   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 18:40:25.935934   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.935973   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 18:40:25.935996   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 18:40:25.940351   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 18:40:25.970107   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 18:40:26.231894   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:26.372230   80857 cache_images.go:92] duration metric: took 967.383323ms to LoadCachedImages
	W0717 18:40:26.372327   80857 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0717 18:40:26.372346   80857 kubeadm.go:934] updating node { 192.168.39.128 8443 v1.20.0 crio true true} ...
	I0717 18:40:26.372517   80857 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-019549 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:26.372613   80857 ssh_runner.go:195] Run: crio config
	I0717 18:40:26.416155   80857 cni.go:84] Creating CNI manager for ""
	I0717 18:40:26.416181   80857 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:26.416196   80857 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:26.416229   80857 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-019549 NodeName:old-k8s-version-019549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 18:40:26.416526   80857 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-019549"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:26.416595   80857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 18:40:26.426941   80857 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:26.427006   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:26.437810   80857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 18:40:26.460046   80857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:26.482521   80857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 18:40:26.502536   80857 ssh_runner.go:195] Run: grep 192.168.39.128	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:26.506513   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:26.520895   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:26.648931   80857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:26.665278   80857 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549 for IP: 192.168.39.128
	I0717 18:40:26.665300   80857 certs.go:194] generating shared ca certs ...
	I0717 18:40:26.665329   80857 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:26.665508   80857 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:26.665561   80857 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:26.665574   80857 certs.go:256] generating profile certs ...
	I0717 18:40:26.665693   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.key
	I0717 18:40:26.665780   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key.9c9b0a7e
	I0717 18:40:26.665836   80857 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key
	I0717 18:40:26.665998   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:26.666049   80857 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:26.666063   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:26.666095   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:26.666128   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:26.666167   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:26.666225   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:26.667047   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:26.713984   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:26.742617   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:26.770441   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:26.795098   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 18:40:26.825038   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:26.861300   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:26.901664   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:40:26.926357   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:26.948986   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:26.973248   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:26.994642   80857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:27.010158   80857 ssh_runner.go:195] Run: openssl version
	I0717 18:40:27.015861   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:27.026221   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030496   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030567   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.035862   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:27.046312   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:27.057117   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061775   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061824   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.067535   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:27.079022   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:27.090009   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094688   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094768   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.100404   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:27.110653   80857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:27.115117   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:27.120633   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:27.126070   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:27.131500   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:27.137035   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:27.142426   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:40:27.147638   80857 kubeadm.go:392] StartCluster: {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:27.147756   80857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:27.147816   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.187433   80857 cri.go:89] found id: ""
	I0717 18:40:27.187498   80857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:27.197001   80857 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:27.197020   80857 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:27.197070   80857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:27.206758   80857 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:27.207822   80857 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:40:27.208505   80857 kubeconfig.go:62] /home/jenkins/minikube-integration/19283-14386/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-019549" cluster setting kubeconfig missing "old-k8s-version-019549" context setting]
	I0717 18:40:27.209497   80857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:27.212786   80857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:27.222612   80857 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.128
	I0717 18:40:27.222649   80857 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:27.222663   80857 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:27.222721   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.268127   80857 cri.go:89] found id: ""
	I0717 18:40:27.268205   80857 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:27.284334   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:27.293669   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:27.293691   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:27.293743   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:27.305348   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:27.305437   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:27.317749   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:27.328481   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:27.328547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:27.337574   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.346242   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:27.346299   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.354946   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:27.363296   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:27.363350   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:27.371925   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:27.384020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:27.571539   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.767574   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.19599736s)
	I0717 18:40:28.767612   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.011512   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.151980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.258796   80857 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:29.258886   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:29.759072   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.787614   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:33.285208   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:30.956634   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957140   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:30.957059   81910 retry.go:31] will retry after 3.31337478s: waiting for machine to come up
	I0717 18:40:34.272528   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273063   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273094   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:34.273032   81910 retry.go:31] will retry after 3.207729964s: waiting for machine to come up
	I0717 18:40:30.259921   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.758948   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.258967   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.759872   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.259187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.759299   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.259080   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.759583   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.259740   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.759068   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.697183   80180 start.go:364] duration metric: took 48.129837953s to acquireMachinesLock for "embed-certs-527415"
	I0717 18:40:38.697248   80180 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:38.697260   80180 fix.go:54] fixHost starting: 
	I0717 18:40:38.697680   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:38.697712   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:38.713575   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0717 18:40:38.713926   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:38.714396   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:40:38.714422   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:38.714762   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:38.714949   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:38.715109   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:40:38.716552   80180 fix.go:112] recreateIfNeeded on embed-certs-527415: state=Stopped err=<nil>
	I0717 18:40:38.716574   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	W0717 18:40:38.716775   80180 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:38.718610   80180 out.go:177] * Restarting existing kvm2 VM for "embed-certs-527415" ...
	I0717 18:40:35.285888   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:36.285651   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.285676   80401 pod_ready.go:81] duration metric: took 7.506876819s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.285686   80401 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292615   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.292638   80401 pod_ready.go:81] duration metric: took 6.944487ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292650   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:38.298338   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:37.484312   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484723   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has current primary IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484740   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Found IP for machine: 192.168.50.245
	I0717 18:40:37.484753   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserving static IP address...
	I0717 18:40:37.485137   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.485161   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserved static IP address: 192.168.50.245
	I0717 18:40:37.485174   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | skip adding static IP to network mk-default-k8s-diff-port-022930 - found existing host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"}
	I0717 18:40:37.485191   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Getting to WaitForSSH function...
	I0717 18:40:37.485207   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for SSH to be available...
	I0717 18:40:37.487397   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487767   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.487796   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487899   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH client type: external
	I0717 18:40:37.487927   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa (-rw-------)
	I0717 18:40:37.487961   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:37.487973   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | About to run SSH command:
	I0717 18:40:37.487992   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | exit 0
	I0717 18:40:37.608746   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:37.609085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetConfigRaw
	I0717 18:40:37.609739   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.612293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612668   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.612689   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612936   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:40:37.613176   81068 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:37.613194   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:37.613391   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.615483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615774   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.615804   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615881   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.616038   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616187   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616306   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.616470   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.616676   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.616691   81068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:37.720971   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:37.721004   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721307   81068 buildroot.go:166] provisioning hostname "default-k8s-diff-port-022930"
	I0717 18:40:37.721340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.724162   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724507   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.724535   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724712   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.724912   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725090   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725259   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.725430   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.725635   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.725651   81068 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-022930 && echo "default-k8s-diff-port-022930" | sudo tee /etc/hostname
	I0717 18:40:37.837366   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-022930
	
	I0717 18:40:37.837389   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.839920   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840291   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.840325   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.840654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840830   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840970   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.841130   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.841344   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.841363   81068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-022930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-022930/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-022930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:37.948311   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:37.948343   81068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:37.948394   81068 buildroot.go:174] setting up certificates
	I0717 18:40:37.948406   81068 provision.go:84] configureAuth start
	I0717 18:40:37.948416   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.948732   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.951214   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951548   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.951578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951693   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.953805   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954086   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.954105   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954250   81068 provision.go:143] copyHostCerts
	I0717 18:40:37.954318   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:37.954334   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:37.954401   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:37.954531   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:37.954542   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:37.954575   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:37.954657   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:37.954667   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:37.954694   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:37.954758   81068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-022930 san=[127.0.0.1 192.168.50.245 default-k8s-diff-port-022930 localhost minikube]
	I0717 18:40:38.054084   81068 provision.go:177] copyRemoteCerts
	I0717 18:40:38.054136   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:38.054160   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.056841   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057265   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.057300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.057683   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.057839   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.057982   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.138206   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:38.163105   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 18:40:38.188449   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:38.214829   81068 provision.go:87] duration metric: took 266.409028ms to configureAuth
	I0717 18:40:38.214853   81068 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:38.215005   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:38.215068   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.217684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218010   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.218037   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.218419   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218573   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218706   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.218874   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.219021   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.219039   81068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:38.471162   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:38.471191   81068 machine.go:97] duration metric: took 858.000457ms to provisionDockerMachine
	I0717 18:40:38.471206   81068 start.go:293] postStartSetup for "default-k8s-diff-port-022930" (driver="kvm2")
	I0717 18:40:38.471220   81068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:38.471247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.471558   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:38.471590   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.474241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474673   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.474704   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474868   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.475085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.475245   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.475524   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.554800   81068 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:38.558601   81068 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:38.558624   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:38.558685   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:38.558769   81068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:38.558875   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:38.567664   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:38.589713   81068 start.go:296] duration metric: took 118.491854ms for postStartSetup
	I0717 18:40:38.589754   81068 fix.go:56] duration metric: took 19.496049651s for fixHost
	I0717 18:40:38.589777   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.592433   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592813   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.592860   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592989   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.593188   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593368   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593536   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.593738   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.593937   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.593955   81068 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:40:38.697050   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241638.669121206
	
	I0717 18:40:38.697075   81068 fix.go:216] guest clock: 1721241638.669121206
	I0717 18:40:38.697085   81068 fix.go:229] Guest: 2024-07-17 18:40:38.669121206 +0000 UTC Remote: 2024-07-17 18:40:38.589759024 +0000 UTC m=+204.149894792 (delta=79.362182ms)
	I0717 18:40:38.697108   81068 fix.go:200] guest clock delta is within tolerance: 79.362182ms
	I0717 18:40:38.697118   81068 start.go:83] releasing machines lock for "default-k8s-diff-port-022930", held for 19.603450588s
	I0717 18:40:38.697143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.697381   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:38.700059   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700504   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.700529   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700764   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701541   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701619   81068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:38.701672   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.701777   81068 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:38.701797   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.704169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704478   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.704503   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704657   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.704849   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705002   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705164   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.705262   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.705300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.705496   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.705663   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705817   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705967   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.825607   81068 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:38.831484   81068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:38.972775   81068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:38.978446   81068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:38.978502   81068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:38.999160   81068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:38.999180   81068 start.go:495] detecting cgroup driver to use...
	I0717 18:40:38.999234   81068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:39.016133   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:39.029031   81068 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:39.029083   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:39.042835   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:39.056981   81068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:39.168521   81068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:39.306630   81068 docker.go:233] disabling docker service ...
	I0717 18:40:39.306704   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:39.320435   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:39.337780   81068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:35.259643   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:35.759432   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.259818   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.759627   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.259968   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.758933   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.259980   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.759776   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.259988   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.496847   81068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:39.627783   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:39.641684   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:39.659183   81068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:39.659250   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.669034   81068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:39.669100   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.678708   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.688822   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.699484   81068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:39.709505   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.720715   81068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.736510   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
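The four sed edits above rewrite the CRI-O drop-in so the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl match what the kubeadm setup below expects. A rough in-process equivalent, assuming the drop-in path and keys shown in the log (the regexes simplify the sed expressions and skip the delete-then-append dance for conmon_cgroup):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // patchCrioConf applies the same logical edits as the sed commands above:
    // pin the pause image, force the cgroupfs cgroup manager, put conmon in the
    // "pod" cgroup, and allow unprivileged low ports via default_sysctls.
    func patchCrioConf(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	conf := string(data)
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
    		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	}
    	return os.WriteFile(path, []byte(conf), 0o644)
    }

    func main() {
    	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }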
	I0717 18:40:39.746991   81068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:39.757265   81068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:39.757320   81068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:39.774777   81068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:39.789593   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:39.907377   81068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:40.039498   81068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:40.039592   81068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:40.044502   81068 start.go:563] Will wait 60s for crictl version
	I0717 18:40:40.044558   81068 ssh_runner.go:195] Run: which crictl
	I0717 18:40:40.048708   81068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:40.087738   81068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:40.087822   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.115460   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.150181   81068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:40:38.719828   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Start
	I0717 18:40:38.720004   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring networks are active...
	I0717 18:40:38.720983   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network default is active
	I0717 18:40:38.721537   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network mk-embed-certs-527415 is active
	I0717 18:40:38.721945   80180 main.go:141] libmachine: (embed-certs-527415) Getting domain xml...
	I0717 18:40:38.722654   80180 main.go:141] libmachine: (embed-certs-527415) Creating domain...
	I0717 18:40:40.007036   80180 main.go:141] libmachine: (embed-certs-527415) Waiting to get IP...
	I0717 18:40:40.007975   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.008511   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.008608   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.008495   82069 retry.go:31] will retry after 268.334211ms: waiting for machine to come up
	I0717 18:40:40.278129   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.278639   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.278670   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.278585   82069 retry.go:31] will retry after 350.00147ms: waiting for machine to come up
	I0717 18:40:40.630229   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.630819   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.630853   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.630768   82069 retry.go:31] will retry after 411.079615ms: waiting for machine to come up
	I0717 18:40:41.043232   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.043851   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.043880   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.043822   82069 retry.go:31] will retry after 387.726284ms: waiting for machine to come up
	I0717 18:40:41.433536   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.434058   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.434092   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.434005   82069 retry.go:31] will retry after 538.564385ms: waiting for machine to come up
	I0717 18:40:41.973917   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.974457   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.974489   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.974395   82069 retry.go:31] will retry after 778.576616ms: waiting for machine to come up
	I0717 18:40:42.754322   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:42.754872   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:42.754899   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:42.754837   82069 retry.go:31] will retry after 758.957234ms: waiting for machine to come up
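The "will retry after ..." lines above are libmachine polling the libvirt network for the domain's DHCP lease with a randomized, slowly growing delay. A generic sketch of that wait loop; lookupLeaseIP is a hypothetical placeholder, not a real minikube function:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor retries fn with a randomized, growing delay until it succeeds or
    // the deadline passes, similar in spirit to the retry.go messages above.
    func waitFor(timeout time.Duration, fn func() (string, error)) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if v, err := fn(); err == nil {
    			return v, nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 2*time.Second {
    			delay += 250 * time.Millisecond
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    // lookupLeaseIP is a stand-in so the sketch compiles; a real implementation
    // would query the libvirt network for the lease matching the domain's MAC.
    func lookupLeaseIP(network, mac string) (string, error) {
    	return "", errors.New("no lease yet")
    }

    func main() {
    	ip, err := waitFor(3*time.Minute, func() (string, error) {
    		return lookupLeaseIP("mk-embed-certs-527415", "52:54:00:4e:52:9a")
    	})
    	fmt.Println(ip, err)
    }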
	I0717 18:40:40.299673   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:40.801297   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.801325   80401 pod_ready.go:81] duration metric: took 4.508666316s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.801339   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807354   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.807372   80401 pod_ready.go:81] duration metric: took 6.024916ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807380   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812934   80401 pod_ready.go:92] pod "kube-proxy-tn5xn" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.812982   80401 pod_ready.go:81] duration metric: took 5.594378ms for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812996   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817940   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.817969   80401 pod_ready.go:81] duration metric: took 4.96427ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817982   80401 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:42.825018   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:40.151220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:40.153791   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:40.154246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154472   81068 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:40.159310   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:40.172121   81068 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:40.172256   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:40.172307   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:40.215863   81068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:40.215940   81068 ssh_runner.go:195] Run: which lz4
	I0717 18:40:40.220502   81068 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 18:40:40.224682   81068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:40.224714   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:41.511505   81068 crio.go:462] duration metric: took 1.291039238s to copy over tarball
	I0717 18:40:41.511574   81068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:40:43.730839   81068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.219230444s)
	I0717 18:40:43.730901   81068 crio.go:469] duration metric: took 2.219370372s to extract the tarball
	I0717 18:40:43.730912   81068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:43.767876   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:43.809466   81068 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:40:43.809494   81068 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:40:43.809505   81068 kubeadm.go:934] updating node { 192.168.50.245 8444 v1.30.2 crio true true} ...
	I0717 18:40:43.809646   81068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-022930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:43.809740   81068 ssh_runner.go:195] Run: crio config
	I0717 18:40:43.850614   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:43.850635   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:43.850648   81068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:43.850669   81068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-022930 NodeName:default-k8s-diff-port-022930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:43.850795   81068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-022930"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:43.850851   81068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:40:43.862674   81068 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:43.862733   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:43.873304   81068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 18:40:43.888884   81068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:43.903631   81068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 18:40:43.918768   81068 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:43.922033   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:43.932546   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:44.049621   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:44.065718   81068 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930 for IP: 192.168.50.245
	I0717 18:40:44.065747   81068 certs.go:194] generating shared ca certs ...
	I0717 18:40:44.065767   81068 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:44.065939   81068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:44.065999   81068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:44.066016   81068 certs.go:256] generating profile certs ...
	I0717 18:40:44.066149   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/client.key
	I0717 18:40:44.066224   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key.8aa7f0a0
	I0717 18:40:44.066284   81068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key
	I0717 18:40:44.066445   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:44.066494   81068 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:44.066507   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:44.066548   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:44.066579   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:44.066606   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:44.066650   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:44.067421   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:44.104160   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:44.133716   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:44.161170   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:44.190489   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 18:40:44.211792   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:44.232875   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:44.255059   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:44.276826   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:44.298357   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:44.320634   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:44.345428   81068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:44.362934   81068 ssh_runner.go:195] Run: openssl version
	I0717 18:40:44.369764   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:44.382557   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386445   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386483   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.392033   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:44.401987   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:44.411437   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415367   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415419   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.420523   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:44.429915   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:44.439371   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443248   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443301   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.448380   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
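The /etc/ssl/certs/<hash>.0 symlinks created above follow OpenSSL's c_rehash naming convention: the link name is the certificate's subject hash plus a ".0" suffix. A small sketch that derives the hash by shelling out to openssl (as the log does) and creates the link:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash creates certsDir/<subject-hash>.0 -> certPath, matching the
    // "openssl x509 -hash -noout -in ..." plus "ln -fs ..." steps in the log.
    func linkByHash(certPath, certsDir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link, like ln -fs
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	link, err := linkByHash("/usr/share/ca-certificates/215772.pem", "/etc/ssl/certs")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("created", link)
    }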
	I0717 18:40:44.457828   81068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:44.462151   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:44.467474   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:44.472829   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
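"openssl x509 -checkend 86400", as run above, asks whether a certificate expires within the next 24 hours. The same check can be done in-process with crypto/x509; a sketch over a few of the cert paths listed in the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file expires
    // within d, the in-process equivalent of `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	for _, p := range []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	} {
    		soon, err := expiresWithin(p, 24*time.Hour)
    		fmt.Println(p, "expires within 24h:", soon, err)
    	}
    }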
	I0717 18:40:40.259910   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:40.759917   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.259718   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.759839   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.259129   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.759772   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.259989   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.759724   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.258978   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.759594   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.515097   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:43.515595   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:43.515616   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:43.515539   82069 retry.go:31] will retry after 1.173590835s: waiting for machine to come up
	I0717 18:40:44.691027   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:44.691479   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:44.691520   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:44.691428   82069 retry.go:31] will retry after 1.594704966s: waiting for machine to come up
	I0717 18:40:46.288022   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:46.288609   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:46.288642   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:46.288549   82069 retry.go:31] will retry after 2.014912325s: waiting for machine to come up
	I0717 18:40:45.323815   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:47.324715   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:44.478397   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:44.483860   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:44.489029   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:40:44.494220   81068 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:44.494329   81068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:44.494381   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.534380   81068 cri.go:89] found id: ""
	I0717 18:40:44.534445   81068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:44.545270   81068 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:44.545287   81068 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:44.545328   81068 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:44.555521   81068 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:44.556584   81068 kubeconfig.go:125] found "default-k8s-diff-port-022930" server: "https://192.168.50.245:8444"
	I0717 18:40:44.558675   81068 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:44.567696   81068 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.245
	I0717 18:40:44.567727   81068 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:44.567739   81068 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:44.567787   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.605757   81068 cri.go:89] found id: ""
	I0717 18:40:44.605833   81068 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:44.622187   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:44.631169   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:44.631191   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:44.631241   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:40:44.639194   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:44.639248   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:44.647542   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:40:44.655622   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:44.655708   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:44.663923   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.671733   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:44.671778   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.680375   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:40:44.688043   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:44.688085   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:44.697020   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:44.705554   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:44.812051   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.351683   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.559471   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.618086   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
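During a restart, minikube re-runs individual kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml rather than a full `kubeadm init`. A sketch of that sequencing, with the binary and config paths copied from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const (
    		kubeadm = "/var/lib/minikube/binaries/v1.30.2/kubeadm"
    		config  = "/var/tmp/minikube/kubeadm.yaml"
    	)
    	// Same phase order as the log: certs, kubeconfig, kubelet-start,
    	// control-plane, then local etcd.
    	phases := [][]string{
    		{"init", "phase", "certs", "all", "--config", config},
    		{"init", "phase", "kubeconfig", "all", "--config", config},
    		{"init", "phase", "kubelet-start", "--config", config},
    		{"init", "phase", "control-plane", "all", "--config", config},
    		{"init", "phase", "etcd", "local", "--config", config},
    	}
    	for _, args := range phases {
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", args, err)
    			os.Exit(1)
    		}
    	}
    }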
	I0717 18:40:45.678836   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:45.678926   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.179998   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.679083   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.179084   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.679042   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.179150   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.195192   81068 api_server.go:72] duration metric: took 2.516354411s to wait for apiserver process to appear ...
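The repeated pgrep lines above are a bounded 500ms poll for the kube-apiserver process. A sketch of the same wait, using the pgrep pattern from the log:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*`
    // every 500ms until it succeeds or timeout elapses, like the loop above.
    func waitForAPIServerProcess(timeout time.Duration) (time.Duration, error) {
    	start := time.Now()
    	for time.Since(start) < timeout {
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return time.Since(start), nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return 0, errors.New("timed out waiting for apiserver process to appear")
    }

    func main() {
    	took, err := waitForAPIServerProcess(time.Minute)
    	fmt.Println(took, err)
    }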
	I0717 18:40:48.195222   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:48.195247   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:45.259185   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:45.759765   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.259009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.759131   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.259477   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.759386   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.259977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.759374   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.259744   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.759440   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.393650   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.393688   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.393705   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.467974   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.468000   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.696340   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.702264   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:50.702308   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.195503   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.200034   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:51.200060   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.695594   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.699593   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:40:51.706025   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:40:51.706048   81068 api_server.go:131] duration metric: took 3.510818337s to wait for apiserver health ...
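The healthz probe tolerates the 403 (anonymous user) and 500 (post-start hooks still running) responses shown above and only stops once /healthz returns 200 with body "ok". A sketch of that loop, skipping TLS verification the way an anonymous probe against the apiserver's self-signed cert must:

    package main

    import (
    	"crypto/tls"
    	"errors"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it answers
    // 200 "ok", treating 403 and 500 as "not ready yet", as in the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
    				return nil
    			}
    			fmt.Printf("status: %s returned error %d\n", url, resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return errors.New("apiserver never became healthy")
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.50.245:8444/healthz", 4*time.Minute))
    }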
	I0717 18:40:51.706059   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:51.706067   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:51.707696   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:40:48.305798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:48.306290   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:48.306323   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:48.306232   82069 retry.go:31] will retry after 1.789943402s: waiting for machine to come up
	I0717 18:40:50.098279   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:50.098771   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:50.098798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:50.098734   82069 retry.go:31] will retry after 2.765766483s: waiting for machine to come up
	I0717 18:40:52.867667   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:52.868191   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:52.868212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:52.868139   82069 retry.go:31] will retry after 2.762670644s: waiting for machine to come up
	I0717 18:40:49.325415   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.824015   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:53.824980   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.708887   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:51.718704   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
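Configuring the bridge CNI amounts to dropping a single conflist into /etc/cni/net.d. The 496-byte file minikube copies is not reproduced in the log; the sketch below writes a plausible minimal bridge + portmap conflist for the 10.244.0.0/16 pod CIDR used earlier (the exact contents minikube generates may differ):

    package main

    import (
    	"fmt"
    	"os"
    )

    // An illustrative bridge CNI conflist for the 10.244.0.0/16 pod CIDR; a
    // stand-in for, not a copy of, the file minikube installs.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }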
	I0717 18:40:51.735711   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:51.745976   81068 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:51.746009   81068 system_pods.go:61] "coredns-7db6d8ff4d-czk4x" [80cedf0b-248a-458e-994c-81f852d78076] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:51.746022   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f9cf97bf-5fdc-4623-a78c-d29e0352ce40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:51.746036   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [599cef4d-2b4d-4cd5-9552-99de585759eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:51.746051   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [89092470-6fc9-47b2-b680-7c93945d9005] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:51.746062   81068 system_pods.go:61] "kube-proxy-hj7ss" [d260f18e-7a01-4f07-8c6a-87e8f6329f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 18:40:51.746074   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [fe098478-fcb6-4084-b773-11c2cbb995aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:51.746083   81068 system_pods.go:61] "metrics-server-569cc877fc-j9qhx" [18efb008-e7d3-435e-9156-57c16b454d07] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:51.746093   81068 system_pods.go:61] "storage-provisioner" [ac856758-62ca-485f-aa31-5cd1c7d1dbe5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:40:51.746103   81068 system_pods.go:74] duration metric: took 10.373616ms to wait for pod list to return data ...
	I0717 18:40:51.746115   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:51.749151   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:51.749173   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:51.749185   81068 node_conditions.go:105] duration metric: took 3.061813ms to run NodePressure ...
	I0717 18:40:51.749204   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:52.049486   81068 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053636   81068 kubeadm.go:739] kubelet initialised
	I0717 18:40:52.053656   81068 kubeadm.go:740] duration metric: took 4.136528ms waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053665   81068 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:52.058401   81068 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.062406   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062429   81068 pod_ready.go:81] duration metric: took 4.007504ms for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.062439   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062454   81068 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.066161   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066185   81068 pod_ready.go:81] duration metric: took 3.717781ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.066202   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066212   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.070043   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070064   81068 pod_ready.go:81] duration metric: took 3.840533ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.070074   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070080   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:54.077110   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
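The pod_ready.go entries above show the post-restart wait loop: minikube polls each system-critical pod for up to 4m0s and logs WaitExtra errors while the hosting node itself is still NotReady. As a hedged illustration only (not minikube's own code), a minimal client-go sketch of the same readiness check could look like the following; the kubeconfig path is a hypothetical placeholder, and the pod name is taken from the log above.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Hypothetical kubeconfig path; minikube manages its own per-profile config.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log above
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-czk4x", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // poll interval, analogous to the repeated pod_ready.go checks
    	}
    	fmt.Println("timed out waiting for pod readiness")
    }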
	I0717 18:40:50.258977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.259867   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.759826   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.259016   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.759708   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.259589   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.759788   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.259753   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.759841   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.633531   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.633999   80180 main.go:141] libmachine: (embed-certs-527415) Found IP for machine: 192.168.61.90
	I0717 18:40:55.634014   80180 main.go:141] libmachine: (embed-certs-527415) Reserving static IP address...
	I0717 18:40:55.634026   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has current primary IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.634407   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.634438   80180 main.go:141] libmachine: (embed-certs-527415) Reserved static IP address: 192.168.61.90
	I0717 18:40:55.634456   80180 main.go:141] libmachine: (embed-certs-527415) DBG | skip adding static IP to network mk-embed-certs-527415 - found existing host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"}
	I0717 18:40:55.634476   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Getting to WaitForSSH function...
	I0717 18:40:55.634490   80180 main.go:141] libmachine: (embed-certs-527415) Waiting for SSH to be available...
	I0717 18:40:55.636604   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.636877   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.636904   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.637010   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH client type: external
	I0717 18:40:55.637032   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa (-rw-------)
	I0717 18:40:55.637063   80180 main.go:141] libmachine: (embed-certs-527415) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:55.637082   80180 main.go:141] libmachine: (embed-certs-527415) DBG | About to run SSH command:
	I0717 18:40:55.637094   80180 main.go:141] libmachine: (embed-certs-527415) DBG | exit 0
	I0717 18:40:55.765208   80180 main.go:141] libmachine: (embed-certs-527415) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:55.765554   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:40:55.766322   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:55.769331   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.769800   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.769827   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.770203   80180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json ...
	I0717 18:40:55.770593   80180 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:55.770620   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:55.770826   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.773837   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774313   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.774346   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774553   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.774750   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.774909   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.775060   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.775277   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.775534   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.775556   80180 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:55.888982   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:55.889013   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889259   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:40:55.889286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889501   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.891900   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892284   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.892302   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892532   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.892701   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892853   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892993   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.893136   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.893293   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.893310   80180 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-527415 && echo "embed-certs-527415" | sudo tee /etc/hostname
	I0717 18:40:56.018869   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-527415
	
	I0717 18:40:56.018898   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.021591   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.021888   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.021909   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.022286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.022489   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022646   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022765   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.022905   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.023050   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.023066   80180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-527415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-527415/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-527415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:56.146411   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:56.146455   80180 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:56.146478   80180 buildroot.go:174] setting up certificates
	I0717 18:40:56.146490   80180 provision.go:84] configureAuth start
	I0717 18:40:56.146502   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:56.146767   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.149369   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149725   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.149755   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149937   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.152431   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152753   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.152774   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152936   80180 provision.go:143] copyHostCerts
	I0717 18:40:56.153028   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:56.153041   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:56.153096   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:56.153186   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:56.153194   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:56.153214   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:56.153277   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:56.153283   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:56.153300   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:56.153349   80180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.embed-certs-527415 san=[127.0.0.1 192.168.61.90 embed-certs-527415 localhost minikube]
	I0717 18:40:56.326978   80180 provision.go:177] copyRemoteCerts
	I0717 18:40:56.327024   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:56.327045   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.329432   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329778   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.329809   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329927   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.330121   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.330295   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.330409   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.415173   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:56.438501   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 18:40:56.460520   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:56.481808   80180 provision.go:87] duration metric: took 335.305142ms to configureAuth
	I0717 18:40:56.481832   80180 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:56.482001   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:56.482063   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.484653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485044   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.485074   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485222   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.485468   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485652   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485810   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.485953   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.486108   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.486123   80180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:56.741135   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:56.741185   80180 machine.go:97] duration metric: took 970.573336ms to provisionDockerMachine
	I0717 18:40:56.741204   80180 start.go:293] postStartSetup for "embed-certs-527415" (driver="kvm2")
	I0717 18:40:56.741221   80180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:56.741245   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.741597   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:56.741625   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.744356   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.744805   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.744831   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.745025   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.745224   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.745382   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.745549   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.835435   80180 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:56.839724   80180 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:56.839753   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:56.839834   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:56.839945   80180 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:56.840083   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:56.849582   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:56.872278   80180 start.go:296] duration metric: took 131.057656ms for postStartSetup
	I0717 18:40:56.872347   80180 fix.go:56] duration metric: took 18.175085798s for fixHost
	I0717 18:40:56.872375   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.874969   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875308   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.875340   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875533   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.875722   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.875955   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.876089   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.876274   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.876459   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.876469   80180 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:56.985888   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241656.959508652
	
	I0717 18:40:56.985907   80180 fix.go:216] guest clock: 1721241656.959508652
	I0717 18:40:56.985914   80180 fix.go:229] Guest: 2024-07-17 18:40:56.959508652 +0000 UTC Remote: 2024-07-17 18:40:56.872354453 +0000 UTC m=+348.896679896 (delta=87.154199ms)
	I0717 18:40:56.985939   80180 fix.go:200] guest clock delta is within tolerance: 87.154199ms
	I0717 18:40:56.985944   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 18.288718042s
	I0717 18:40:56.985964   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.986210   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.988716   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989086   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.989114   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989279   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989786   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989966   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.990055   80180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:56.990092   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.990360   80180 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:56.990390   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.992519   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992816   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.992835   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992852   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992984   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993162   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.993234   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.993356   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993401   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993499   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.993541   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993754   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993915   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:57.116598   80180 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:57.122546   80180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:57.268379   80180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:57.274748   80180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:57.274819   80180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:57.290374   80180 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:57.290394   80180 start.go:495] detecting cgroup driver to use...
	I0717 18:40:57.290443   80180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:57.307521   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:57.323478   80180 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:57.323554   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:57.337078   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:57.350181   80180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:57.463512   80180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:57.626650   80180 docker.go:233] disabling docker service ...
	I0717 18:40:57.626714   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:57.641067   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:57.655085   80180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:57.802789   80180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:57.919140   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:57.932620   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:57.949471   80180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:57.949528   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.960297   80180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:57.960366   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.970890   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.980768   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.990723   80180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:58.000791   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.010332   80180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.026611   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.036106   80180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:58.044742   80180 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:58.044791   80180 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:58.056584   80180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:58.065470   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:58.182119   80180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:58.319330   80180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:58.319400   80180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:58.326361   80180 start.go:563] Will wait 60s for crictl version
	I0717 18:40:58.326405   80180 ssh_runner.go:195] Run: which crictl
	I0717 18:40:58.329951   80180 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:58.366561   80180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:58.366668   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.398483   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.427421   80180 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:40:56.324834   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.325283   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:56.077315   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.077815   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:55.259450   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.759932   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.259395   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.759855   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.259739   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.759436   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.258951   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.759931   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.259588   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.759651   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.428872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:58.431182   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431554   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:58.431580   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431756   80180 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:58.435914   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:58.448777   80180 kubeadm.go:883] updating cluster {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:58.448923   80180 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:58.449018   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:58.488011   80180 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:58.488077   80180 ssh_runner.go:195] Run: which lz4
	I0717 18:40:58.491828   80180 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:58.495609   80180 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:58.495640   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:59.686445   80180 crio.go:462] duration metric: took 1.194619366s to copy over tarball
	I0717 18:40:59.686513   80180 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:41:01.862679   80180 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176132338s)
	I0717 18:41:01.862710   80180 crio.go:469] duration metric: took 2.176236509s to extract the tarball
	I0717 18:41:01.862719   80180 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:41:01.901813   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:41:01.945403   80180 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:41:01.945429   80180 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:41:01.945438   80180 kubeadm.go:934] updating node { 192.168.61.90 8443 v1.30.2 crio true true} ...
	I0717 18:41:01.945554   80180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-527415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:41:01.945631   80180 ssh_runner.go:195] Run: crio config
	I0717 18:41:01.991102   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:01.991130   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:01.991144   80180 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:41:01.991168   80180 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.90 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-527415 NodeName:embed-certs-527415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:41:01.991331   80180 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-527415"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
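The generated kubeadm config above pins the runtime-specific settings for this profile (cgroupfs driver, the CRI-O socket, pod/service CIDRs). Purely as a sketch, and not how minikube or kubeadm actually parse it, one way to sanity-check a fragment such as the KubeletConfiguration block from Go is to unmarshal it with a YAML library and inspect the fields; the struct below models only the fields shown here.

    package main

    import (
    	"fmt"

    	"gopkg.in/yaml.v3"
    )

    // kubeletFragment models just the fields checked in this sketch; the real
    // KubeletConfiguration type lives in the Kubernetes component config API.
    type kubeletFragment struct {
    	CgroupDriver             string `yaml:"cgroupDriver"`
    	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    	FailSwapOn               bool   `yaml:"failSwapOn"`
    }

    func main() {
    	// Fragment copied from the generated config above.
    	doc := `
    cgroupDriver: cgroupfs
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    failSwapOn: false
    `
    	var kc kubeletFragment
    	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
    		panic(err)
    	}
    	fmt.Printf("driver=%s endpoint=%s failSwapOn=%v\n",
    		kc.CgroupDriver, kc.ContainerRuntimeEndpoint, kc.FailSwapOn)
    }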
	
	I0717 18:41:01.991397   80180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:41:02.001007   80180 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:41:02.001082   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:41:02.010130   80180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0717 18:41:02.025405   80180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:41:02.041167   80180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0717 18:41:02.057441   80180 ssh_runner.go:195] Run: grep 192.168.61.90	control-plane.minikube.internal$ /etc/hosts
	I0717 18:41:02.060878   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:41:02.072984   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:41:02.188194   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:41:02.204599   80180 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415 for IP: 192.168.61.90
	I0717 18:41:02.204623   80180 certs.go:194] generating shared ca certs ...
	I0717 18:41:02.204643   80180 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:41:02.204822   80180 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:41:02.204885   80180 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:41:02.204899   80180 certs.go:256] generating profile certs ...
	I0717 18:41:02.205047   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key
	I0717 18:41:02.205129   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9
	I0717 18:41:02.205188   80180 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key
	I0717 18:41:02.205372   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:41:02.205436   80180 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:41:02.205451   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:41:02.205486   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:41:02.205526   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:41:02.205556   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:41:02.205612   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:41:02.206441   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:41:02.234135   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:41:02.259780   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:41:02.285464   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:41:02.316267   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 18:41:02.348835   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:41:02.375505   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:41:02.402683   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:41:02.426689   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:41:02.449328   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:41:02.472140   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:41:02.494016   80180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:41:02.512612   80180 ssh_runner.go:195] Run: openssl version
	I0717 18:41:02.519908   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:41:02.532706   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538136   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538191   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.545493   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:41:02.558832   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:41:02.570455   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575515   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575582   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.581428   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:41:02.592439   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:41:02.602823   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608370   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608433   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.615367   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:41:02.628355   80180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:41:02.632772   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:41:02.638325   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:41:02.643635   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:41:02.648960   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:41:02.654088   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:41:02.659220   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:41:02.664325   80180 kubeadm.go:392] StartCluster: {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:41:02.664444   80180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:41:02.664495   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.699590   80180 cri.go:89] found id: ""
	I0717 18:41:02.699676   80180 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:41:02.709427   80180 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:41:02.709452   80180 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:41:02.709503   80180 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:41:02.718489   80180 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:41:02.719505   80180 kubeconfig.go:125] found "embed-certs-527415" server: "https://192.168.61.90:8443"
	I0717 18:41:02.721457   80180 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:41:02.730258   80180 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.90
	I0717 18:41:02.730288   80180 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:41:02.730301   80180 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:41:02.730367   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.768268   80180 cri.go:89] found id: ""
	I0717 18:41:02.768339   80180 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:41:02.786699   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:41:02.796888   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:41:02.796912   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:41:02.796965   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:41:02.805633   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:41:02.805703   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:41:02.817624   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:41:02.827840   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:41:02.827902   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:41:02.836207   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.844201   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:41:02.844265   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.852667   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:41:02.860697   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:41:02.860741   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:41:02.869133   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:41:02.877992   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:02.986350   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:00.823447   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.825375   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:00.578095   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.576899   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.576927   81068 pod_ready.go:81] duration metric: took 10.506835962s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.576953   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584912   81068 pod_ready.go:92] pod "kube-proxy-hj7ss" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.584933   81068 pod_ready.go:81] duration metric: took 7.972079ms for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584964   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590342   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.590366   81068 pod_ready.go:81] duration metric: took 5.392364ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590380   81068 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:00.259461   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:00.759148   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.259596   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.759943   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.259670   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.759900   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.259745   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.759843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.259902   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.759850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.874112   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.091026   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.170734   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.292719   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:41:04.292826   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.793710   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.292924   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.792872   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.293626   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.793632   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.810658   80180 api_server.go:72] duration metric: took 2.517938682s to wait for apiserver process to appear ...
	I0717 18:41:06.810685   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:41:06.810705   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:05.323684   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:07.324653   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:04.596794   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:06.597411   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:09.097409   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:05.259624   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.759258   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.259346   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.759041   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.259467   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.759164   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.259047   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.759959   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.259372   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.759259   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.612683   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.612715   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.612728   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.633949   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.633975   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.811272   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.815690   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:09.815720   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:10.311256   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.319587   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.319620   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:10.811133   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.815819   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.815862   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.311037   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.315892   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.315923   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.811534   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.816601   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.816631   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.311178   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.315484   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.315510   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.811068   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.821016   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.821048   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:13.311166   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:13.315879   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:41:13.322661   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:41:13.322700   80180 api_server.go:131] duration metric: took 6.512007091s to wait for apiserver health ...
	I0717 18:41:13.322713   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:13.322722   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:13.324516   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:41:09.325535   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.325697   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:13.327238   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.597479   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:14.098908   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:10.259845   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:10.759671   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.259895   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.759877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.259003   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.759685   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.759844   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.259541   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.759709   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.325935   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:41:13.337601   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:41:13.354366   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:41:13.364678   80180 system_pods.go:59] 8 kube-system pods found
	I0717 18:41:13.364715   80180 system_pods.go:61] "coredns-7db6d8ff4d-2fnlb" [86d50e9b-fb88-4332-90c5-a969b0654635] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:41:13.364726   80180 system_pods.go:61] "etcd-embed-certs-527415" [9d8ac0a8-4639-48d8-8ac4-88b0bd1e2082] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:41:13.364735   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [7f72c4f9-f1db-4ac6-83e1-2b94245107c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:41:13.364743   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [96081a97-2a90-4fec-84cb-9a399a43aeb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:41:13.364752   80180 system_pods.go:61] "kube-proxy-jltfs" [27f6259e-80cc-4881-bb06-6a2ad529179c] Running
	I0717 18:41:13.364763   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [bed7b515-7ab0-460c-a13f-037f29576f30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:41:13.364775   80180 system_pods.go:61] "metrics-server-569cc877fc-8md44" [1b9d50c8-6ca0-41c3-92d9-eebdccbf1a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:41:13.364783   80180 system_pods.go:61] "storage-provisioner" [ccb34b69-d28d-477e-8c7a-0acdc547bec7] Running
	I0717 18:41:13.364791   80180 system_pods.go:74] duration metric: took 10.40947ms to wait for pod list to return data ...
	I0717 18:41:13.364803   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:41:13.367687   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:41:13.367712   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:41:13.367725   80180 node_conditions.go:105] duration metric: took 2.912986ms to run NodePressure ...
	I0717 18:41:13.367745   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:13.630827   80180 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636658   80180 kubeadm.go:739] kubelet initialised
	I0717 18:41:13.636688   80180 kubeadm.go:740] duration metric: took 5.830484ms waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636699   80180 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:41:13.642171   80180 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.650539   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650573   80180 pod_ready.go:81] duration metric: took 8.374432ms for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.650585   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650599   80180 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.655470   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655500   80180 pod_ready.go:81] duration metric: took 4.8911ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.655512   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655520   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.662448   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662479   80180 pod_ready.go:81] duration metric: took 6.949002ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.662490   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662499   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.757454   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757485   80180 pod_ready.go:81] duration metric: took 94.976348ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.757494   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757501   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157339   80180 pod_ready.go:92] pod "kube-proxy-jltfs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:14.157363   80180 pod_ready.go:81] duration metric: took 399.852649ms for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157381   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:16.163623   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:15.825045   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.323440   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:16.596320   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.596807   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:15.259558   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:15.759585   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.259850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.760009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.259385   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.759208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.259218   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.759779   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.259666   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.759781   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.174371   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.664423   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.663932   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:22.663955   80180 pod_ready.go:81] duration metric: took 8.506565077s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:22.663969   80180 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:20.324547   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.824318   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:21.096071   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:23.596775   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.259286   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:20.759048   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.259801   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.759595   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.259582   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.759871   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.259349   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.759659   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.259964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.759899   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.671105   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:27.170247   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:24.825017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.825067   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.096196   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:28.097501   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:25.259559   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:25.759773   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.759924   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.259509   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.759986   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.259792   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.759564   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:29.259060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:29.259143   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:29.298974   80857 cri.go:89] found id: ""
	I0717 18:41:29.299006   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.299016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:29.299024   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:29.299087   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:29.333764   80857 cri.go:89] found id: ""
	I0717 18:41:29.333786   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.333793   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:29.333801   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:29.333849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:29.369639   80857 cri.go:89] found id: ""
	I0717 18:41:29.369674   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.369688   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:29.369697   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:29.369762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:29.403453   80857 cri.go:89] found id: ""
	I0717 18:41:29.403481   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.403489   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:29.403498   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:29.403555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:29.436662   80857 cri.go:89] found id: ""
	I0717 18:41:29.436687   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.436695   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:29.436701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:29.436749   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:29.471013   80857 cri.go:89] found id: ""
	I0717 18:41:29.471053   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.471064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:29.471074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:29.471139   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:29.502754   80857 cri.go:89] found id: ""
	I0717 18:41:29.502780   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.502787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:29.502793   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:29.502842   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:29.534205   80857 cri.go:89] found id: ""
	I0717 18:41:29.534232   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.534239   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:29.534247   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:29.534259   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:29.585406   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:29.585438   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:29.600629   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:29.600660   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:29.719788   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:29.719807   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:29.719819   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:29.785626   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:29.785662   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:29.669918   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.670544   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:29.325013   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.828532   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:30.097685   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.596760   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.325522   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:32.338046   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:32.338120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:32.370073   80857 cri.go:89] found id: ""
	I0717 18:41:32.370099   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.370106   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:32.370112   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:32.370165   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:32.408764   80857 cri.go:89] found id: ""
	I0717 18:41:32.408789   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.408799   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:32.408806   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:32.408862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:32.449078   80857 cri.go:89] found id: ""
	I0717 18:41:32.449108   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.449118   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:32.449125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:32.449176   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:32.481990   80857 cri.go:89] found id: ""
	I0717 18:41:32.482015   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.482022   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:32.482028   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:32.482077   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:32.521902   80857 cri.go:89] found id: ""
	I0717 18:41:32.521932   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.521942   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:32.521949   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:32.521997   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:32.554148   80857 cri.go:89] found id: ""
	I0717 18:41:32.554177   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.554206   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:32.554216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:32.554270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:32.587342   80857 cri.go:89] found id: ""
	I0717 18:41:32.587366   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.587374   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:32.587379   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:32.587425   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:32.619227   80857 cri.go:89] found id: ""
	I0717 18:41:32.619259   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.619270   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:32.619281   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:32.619296   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:32.669085   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:32.669124   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:32.682464   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:32.682500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:32.749218   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:32.749234   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:32.749245   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:32.814510   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:32.814545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:33.670578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.670952   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.671373   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:34.324458   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:36.823615   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:38.825194   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.096041   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.096436   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:39.096906   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.362866   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:35.375563   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:35.375643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:35.412355   80857 cri.go:89] found id: ""
	I0717 18:41:35.412380   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.412388   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:35.412393   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:35.412439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:35.446596   80857 cri.go:89] found id: ""
	I0717 18:41:35.446621   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.446629   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:35.446634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:35.446691   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:35.481695   80857 cri.go:89] found id: ""
	I0717 18:41:35.481717   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.481725   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:35.481730   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:35.481783   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:35.514528   80857 cri.go:89] found id: ""
	I0717 18:41:35.514573   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.514584   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:35.514592   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:35.514657   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:35.547831   80857 cri.go:89] found id: ""
	I0717 18:41:35.547858   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.547871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:35.547879   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:35.547941   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:35.579059   80857 cri.go:89] found id: ""
	I0717 18:41:35.579084   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.579097   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:35.579104   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:35.579164   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:35.616442   80857 cri.go:89] found id: ""
	I0717 18:41:35.616480   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.616487   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:35.616492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:35.616545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:35.647535   80857 cri.go:89] found id: ""
	I0717 18:41:35.647564   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.647571   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:35.647579   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:35.647595   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:35.696664   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:35.696692   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:35.710474   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:35.710499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:35.785569   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:35.785595   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:35.785611   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:35.865750   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:35.865785   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:38.405391   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:38.417737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:38.417806   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:38.453848   80857 cri.go:89] found id: ""
	I0717 18:41:38.453877   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.453888   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:38.453895   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:38.453949   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:38.487083   80857 cri.go:89] found id: ""
	I0717 18:41:38.487112   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.487122   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:38.487129   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:38.487190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:38.517700   80857 cri.go:89] found id: ""
	I0717 18:41:38.517729   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.517738   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:38.517746   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:38.517808   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:38.547587   80857 cri.go:89] found id: ""
	I0717 18:41:38.547616   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.547625   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:38.547632   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:38.547780   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:38.581511   80857 cri.go:89] found id: ""
	I0717 18:41:38.581535   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.581542   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:38.581548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:38.581675   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:38.618308   80857 cri.go:89] found id: ""
	I0717 18:41:38.618327   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.618334   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:38.618340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:38.618401   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:38.658237   80857 cri.go:89] found id: ""
	I0717 18:41:38.658267   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.658278   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:38.658298   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:38.658359   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:38.694044   80857 cri.go:89] found id: ""
	I0717 18:41:38.694071   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.694080   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:38.694090   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:38.694106   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:38.746621   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:38.746658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:38.758781   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:38.758805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:38.827327   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:38.827345   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:38.827357   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:38.899731   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:38.899762   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:40.170106   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:42.170391   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:40.825940   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.327489   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.097668   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.597625   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.437479   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:41.451264   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:41.451336   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:41.489053   80857 cri.go:89] found id: ""
	I0717 18:41:41.489083   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.489093   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:41.489101   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:41.489162   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:41.521954   80857 cri.go:89] found id: ""
	I0717 18:41:41.521985   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.521996   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:41.522003   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:41.522068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:41.556847   80857 cri.go:89] found id: ""
	I0717 18:41:41.556875   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.556884   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:41.556893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:41.557024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:41.591232   80857 cri.go:89] found id: ""
	I0717 18:41:41.591255   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.591263   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:41.591269   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:41.591315   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:41.624533   80857 cri.go:89] found id: ""
	I0717 18:41:41.624565   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.624576   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:41.624583   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:41.624644   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:41.656033   80857 cri.go:89] found id: ""
	I0717 18:41:41.656063   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.656073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:41.656080   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:41.656140   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:41.691686   80857 cri.go:89] found id: ""
	I0717 18:41:41.691715   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.691725   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:41.691732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:41.691789   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:41.724688   80857 cri.go:89] found id: ""
	I0717 18:41:41.724718   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.724729   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:41.724741   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:41.724760   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:41.802855   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:41.802882   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:41.839242   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:41.839271   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:41.889028   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:41.889058   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:41.901598   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:41.901627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:41.972632   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.472824   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:44.487673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:44.487745   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:44.530173   80857 cri.go:89] found id: ""
	I0717 18:41:44.530204   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.530216   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:44.530224   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:44.530288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:44.577865   80857 cri.go:89] found id: ""
	I0717 18:41:44.577891   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.577899   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:44.577905   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:44.577967   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:44.621528   80857 cri.go:89] found id: ""
	I0717 18:41:44.621551   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.621559   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:44.621564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:44.621622   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:44.655456   80857 cri.go:89] found id: ""
	I0717 18:41:44.655488   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.655498   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:44.655505   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:44.655570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:44.688729   80857 cri.go:89] found id: ""
	I0717 18:41:44.688757   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.688767   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:44.688774   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:44.688832   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:44.720190   80857 cri.go:89] found id: ""
	I0717 18:41:44.720220   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.720231   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:44.720238   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:44.720294   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:44.750109   80857 cri.go:89] found id: ""
	I0717 18:41:44.750135   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.750142   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:44.750147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:44.750203   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:44.780039   80857 cri.go:89] found id: ""
	I0717 18:41:44.780066   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.780090   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:44.780098   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:44.780111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:44.829641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:44.829675   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:44.842587   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:44.842616   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:44.906331   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.906355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:44.906369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:44.983364   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:44.983400   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:44.671557   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.170565   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:45.827780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.324627   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:46.096988   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.596469   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.525057   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:47.538586   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:47.538639   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:47.574805   80857 cri.go:89] found id: ""
	I0717 18:41:47.574832   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.574843   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:47.574849   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:47.574906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:47.609576   80857 cri.go:89] found id: ""
	I0717 18:41:47.609603   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.609611   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:47.609617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:47.609662   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:47.643899   80857 cri.go:89] found id: ""
	I0717 18:41:47.643927   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.643936   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:47.643941   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:47.643990   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:47.680365   80857 cri.go:89] found id: ""
	I0717 18:41:47.680404   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.680412   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:47.680418   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:47.680475   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:47.719038   80857 cri.go:89] found id: ""
	I0717 18:41:47.719061   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.719069   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:47.719074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:47.719118   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:47.751708   80857 cri.go:89] found id: ""
	I0717 18:41:47.751735   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.751744   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:47.751750   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:47.751807   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:47.789803   80857 cri.go:89] found id: ""
	I0717 18:41:47.789838   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.789850   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:47.789858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:47.789921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:47.821450   80857 cri.go:89] found id: ""
	I0717 18:41:47.821477   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.821487   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:47.821496   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:47.821511   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:47.886501   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:47.886526   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:47.886544   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:47.960142   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:47.960177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:47.995012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:47.995046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:48.046848   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:48.046884   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:49.670208   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:52.169471   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.324628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.597215   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.096114   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.560990   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:50.574906   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:50.575051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:50.607647   80857 cri.go:89] found id: ""
	I0717 18:41:50.607674   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.607687   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:50.607696   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:50.607756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:50.640621   80857 cri.go:89] found id: ""
	I0717 18:41:50.640651   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.640660   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:50.640667   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:50.640741   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:50.675269   80857 cri.go:89] found id: ""
	I0717 18:41:50.675293   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.675303   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:50.675313   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:50.675369   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:50.707915   80857 cri.go:89] found id: ""
	I0717 18:41:50.707938   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.707946   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:50.707951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:50.708006   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:50.741149   80857 cri.go:89] found id: ""
	I0717 18:41:50.741170   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.741178   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:50.741184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:50.741288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:50.772768   80857 cri.go:89] found id: ""
	I0717 18:41:50.772792   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.772799   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:50.772804   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:50.772854   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:50.804996   80857 cri.go:89] found id: ""
	I0717 18:41:50.805018   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.805028   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:50.805035   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:50.805094   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:50.838933   80857 cri.go:89] found id: ""
	I0717 18:41:50.838960   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.838971   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:50.838982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:50.838997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:50.886415   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:50.886444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:50.899024   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:50.899049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:50.965388   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:50.965416   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:50.965434   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:51.044449   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:51.044490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.580749   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:53.593759   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:53.593841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:53.626541   80857 cri.go:89] found id: ""
	I0717 18:41:53.626573   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.626582   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:53.626588   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:53.626645   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:53.658492   80857 cri.go:89] found id: ""
	I0717 18:41:53.658520   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.658529   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:53.658537   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:53.658600   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:53.694546   80857 cri.go:89] found id: ""
	I0717 18:41:53.694582   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.694590   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:53.694595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:53.694650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:53.727028   80857 cri.go:89] found id: ""
	I0717 18:41:53.727053   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.727061   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:53.727067   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:53.727129   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:53.762869   80857 cri.go:89] found id: ""
	I0717 18:41:53.762897   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.762906   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:53.762913   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:53.762976   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:53.794133   80857 cri.go:89] found id: ""
	I0717 18:41:53.794158   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.794166   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:53.794172   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:53.794225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:53.828432   80857 cri.go:89] found id: ""
	I0717 18:41:53.828463   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.828473   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:53.828484   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:53.828546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:53.863316   80857 cri.go:89] found id: ""
	I0717 18:41:53.863345   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.863353   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:53.863362   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:53.863384   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.897353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:53.897380   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:53.944213   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:53.944242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:53.957484   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:53.957509   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:54.025962   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:54.025992   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:54.026006   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:54.170642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.672407   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.325017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:57.823877   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.596492   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:58.096397   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.609502   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:56.621849   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:56.621913   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:56.657469   80857 cri.go:89] found id: ""
	I0717 18:41:56.657498   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.657510   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:56.657517   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:56.657579   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:56.691298   80857 cri.go:89] found id: ""
	I0717 18:41:56.691320   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.691327   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:56.691332   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:56.691386   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:56.723305   80857 cri.go:89] found id: ""
	I0717 18:41:56.723334   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.723344   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:56.723352   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:56.723417   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:56.755893   80857 cri.go:89] found id: ""
	I0717 18:41:56.755918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.755926   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:56.755931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:56.755982   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:56.787777   80857 cri.go:89] found id: ""
	I0717 18:41:56.787807   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.787819   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:56.787828   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:56.787894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:56.821126   80857 cri.go:89] found id: ""
	I0717 18:41:56.821152   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.821163   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:56.821170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:56.821228   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:56.855894   80857 cri.go:89] found id: ""
	I0717 18:41:56.855918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.855926   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:56.855931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:56.855980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:56.893483   80857 cri.go:89] found id: ""
	I0717 18:41:56.893505   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.893512   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:56.893521   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:56.893532   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:56.945355   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:56.945385   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:56.958426   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:56.958451   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:57.025542   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:57.025571   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:57.025585   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:57.100497   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:57.100528   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:59.636400   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:59.648517   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:59.648571   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:59.683954   80857 cri.go:89] found id: ""
	I0717 18:41:59.683978   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.683988   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:59.683995   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:59.684065   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:59.719135   80857 cri.go:89] found id: ""
	I0717 18:41:59.719162   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.719172   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:59.719179   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:59.719243   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:59.755980   80857 cri.go:89] found id: ""
	I0717 18:41:59.756012   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.756023   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:59.756030   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:59.756091   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:59.788147   80857 cri.go:89] found id: ""
	I0717 18:41:59.788176   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.788185   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:59.788191   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:59.788239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:59.819646   80857 cri.go:89] found id: ""
	I0717 18:41:59.819670   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.819679   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:59.819685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:59.819738   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:59.852487   80857 cri.go:89] found id: ""
	I0717 18:41:59.852508   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.852516   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:59.852521   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:59.852586   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:59.883761   80857 cri.go:89] found id: ""
	I0717 18:41:59.883794   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.883805   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:59.883812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:59.883870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:59.914854   80857 cri.go:89] found id: ""
	I0717 18:41:59.914882   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.914889   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:59.914896   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:59.914909   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:59.995619   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:59.995650   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:00.034444   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:00.034472   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:59.172253   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.670422   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:59.824347   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.824444   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:03.826580   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.096457   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:02.596587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.084278   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:00.084308   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:00.097771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:00.097796   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:00.161753   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:02.662134   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:02.676200   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:02.676277   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:02.711606   80857 cri.go:89] found id: ""
	I0717 18:42:02.711640   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.711652   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:02.711659   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:02.711711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:02.744704   80857 cri.go:89] found id: ""
	I0717 18:42:02.744728   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.744735   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:02.744741   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:02.744800   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:02.778815   80857 cri.go:89] found id: ""
	I0717 18:42:02.778846   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.778859   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:02.778868   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:02.778936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:02.810896   80857 cri.go:89] found id: ""
	I0717 18:42:02.810928   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.810941   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:02.810950   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:02.811024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:02.843868   80857 cri.go:89] found id: ""
	I0717 18:42:02.843892   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.843903   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:02.843910   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:02.843972   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:02.876311   80857 cri.go:89] found id: ""
	I0717 18:42:02.876338   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.876348   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:02.876356   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:02.876420   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:02.910752   80857 cri.go:89] found id: ""
	I0717 18:42:02.910776   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.910784   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:02.910789   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:02.910835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:02.947286   80857 cri.go:89] found id: ""
	I0717 18:42:02.947318   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.947328   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:02.947337   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:02.947351   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:02.999512   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:02.999542   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:03.014063   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:03.014094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:03.081822   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:03.081844   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:03.081858   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:03.161088   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:03.161117   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:04.171168   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.669508   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.324608   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:08.825084   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:04.597129   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:07.098716   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:05.699198   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:05.711597   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:05.711654   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:05.749653   80857 cri.go:89] found id: ""
	I0717 18:42:05.749684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.749694   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:05.749703   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:05.749757   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:05.785095   80857 cri.go:89] found id: ""
	I0717 18:42:05.785118   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.785125   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:05.785134   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:05.785179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:05.818085   80857 cri.go:89] found id: ""
	I0717 18:42:05.818111   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.818119   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:05.818125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:05.818171   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:05.851872   80857 cri.go:89] found id: ""
	I0717 18:42:05.851895   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.851902   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:05.851907   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:05.851958   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:05.883924   80857 cri.go:89] found id: ""
	I0717 18:42:05.883948   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.883958   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:05.883965   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:05.884025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:05.916365   80857 cri.go:89] found id: ""
	I0717 18:42:05.916396   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.916407   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:05.916414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:05.916473   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:05.950656   80857 cri.go:89] found id: ""
	I0717 18:42:05.950684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.950695   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:05.950701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:05.950762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:05.992132   80857 cri.go:89] found id: ""
	I0717 18:42:05.992160   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.992169   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:05.992177   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:05.992190   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:06.042162   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:06.042192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:06.055594   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:06.055619   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:06.123007   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:06.123038   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:06.123068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:06.200429   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:06.200460   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.739039   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:08.751520   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:08.751575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:08.783765   80857 cri.go:89] found id: ""
	I0717 18:42:08.783794   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.783805   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:08.783812   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:08.783864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:08.815200   80857 cri.go:89] found id: ""
	I0717 18:42:08.815227   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.815236   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:08.815242   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:08.815289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:08.848970   80857 cri.go:89] found id: ""
	I0717 18:42:08.849002   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.849012   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:08.849021   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:08.849084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:08.881832   80857 cri.go:89] found id: ""
	I0717 18:42:08.881859   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.881866   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:08.881874   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:08.881922   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:08.913119   80857 cri.go:89] found id: ""
	I0717 18:42:08.913142   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.913149   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:08.913155   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:08.913201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:08.947471   80857 cri.go:89] found id: ""
	I0717 18:42:08.947499   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.947509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:08.947515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:08.947570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:08.979570   80857 cri.go:89] found id: ""
	I0717 18:42:08.979599   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.979609   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:08.979615   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:08.979670   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:09.012960   80857 cri.go:89] found id: ""
	I0717 18:42:09.012991   80857 logs.go:276] 0 containers: []
	W0717 18:42:09.013002   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:09.013012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:09.013027   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:09.065732   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:09.065769   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:09.079572   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:09.079602   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:09.151737   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:09.151754   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:09.151766   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:09.230185   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:09.230218   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.670185   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:10.671336   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.325340   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:13.824087   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:09.595757   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.596784   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:14.096765   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.767189   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:11.780044   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:11.780115   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:11.812700   80857 cri.go:89] found id: ""
	I0717 18:42:11.812722   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.812730   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:11.812736   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:11.812781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:11.846855   80857 cri.go:89] found id: ""
	I0717 18:42:11.846883   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.846893   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:11.846900   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:11.846962   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:11.877671   80857 cri.go:89] found id: ""
	I0717 18:42:11.877700   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.877710   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:11.877716   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:11.877767   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:11.908703   80857 cri.go:89] found id: ""
	I0717 18:42:11.908728   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.908735   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:11.908740   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:11.908786   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:11.942191   80857 cri.go:89] found id: ""
	I0717 18:42:11.942218   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.942225   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:11.942231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:11.942284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:11.974751   80857 cri.go:89] found id: ""
	I0717 18:42:11.974782   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.974798   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:11.974807   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:11.974876   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:12.006287   80857 cri.go:89] found id: ""
	I0717 18:42:12.006317   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.006327   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:12.006335   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:12.006396   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:12.036524   80857 cri.go:89] found id: ""
	I0717 18:42:12.036546   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.036554   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:12.036575   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:12.036599   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:12.085073   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:12.085109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:12.098908   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:12.098937   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:12.161665   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:12.161687   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:12.161702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:12.240349   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:12.240401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:14.781101   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:14.794081   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:14.794149   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:14.828975   80857 cri.go:89] found id: ""
	I0717 18:42:14.829003   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.829013   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:14.829021   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:14.829072   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:14.864858   80857 cri.go:89] found id: ""
	I0717 18:42:14.864886   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.864896   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:14.864903   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:14.864986   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:14.897961   80857 cri.go:89] found id: ""
	I0717 18:42:14.897983   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.897991   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:14.897996   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:14.898041   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:14.935499   80857 cri.go:89] found id: ""
	I0717 18:42:14.935521   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.935529   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:14.935534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:14.935591   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:14.967581   80857 cri.go:89] found id: ""
	I0717 18:42:14.967605   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.967621   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:14.967629   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:14.967688   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:15.001844   80857 cri.go:89] found id: ""
	I0717 18:42:15.001876   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.001888   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:15.001894   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:15.001942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:15.038940   80857 cri.go:89] found id: ""
	I0717 18:42:15.038967   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.038977   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:15.038985   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:15.039043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:13.170111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.669712   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:17.669916   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.325511   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:18.823820   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.597587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:19.096905   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.072636   80857 cri.go:89] found id: ""
	I0717 18:42:15.072665   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.072677   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:15.072688   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:15.072703   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:15.124889   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:15.124934   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:15.138661   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:15.138691   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:15.208762   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:15.208791   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:15.208806   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:15.281302   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:15.281336   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:17.817136   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:17.831013   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:17.831078   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:17.867065   80857 cri.go:89] found id: ""
	I0717 18:42:17.867091   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.867101   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:17.867108   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:17.867166   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:17.904143   80857 cri.go:89] found id: ""
	I0717 18:42:17.904171   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.904180   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:17.904188   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:17.904248   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:17.937450   80857 cri.go:89] found id: ""
	I0717 18:42:17.937478   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.937487   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:17.937492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:17.937556   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:17.970650   80857 cri.go:89] found id: ""
	I0717 18:42:17.970679   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.970689   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:17.970696   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:17.970754   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:18.002329   80857 cri.go:89] found id: ""
	I0717 18:42:18.002355   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.002364   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:18.002371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:18.002430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:18.035253   80857 cri.go:89] found id: ""
	I0717 18:42:18.035278   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.035288   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:18.035295   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:18.035356   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:18.070386   80857 cri.go:89] found id: ""
	I0717 18:42:18.070419   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.070431   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:18.070439   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:18.070507   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:18.106148   80857 cri.go:89] found id: ""
	I0717 18:42:18.106170   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.106177   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:18.106185   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:18.106201   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:18.157359   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:18.157390   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:18.171757   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:18.171782   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:18.242795   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:18.242818   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:18.242831   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:18.316221   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:18.316255   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:19.670562   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.171111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.824266   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.824366   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:21.596773   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.098051   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.857953   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:20.870813   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:20.870882   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:20.906033   80857 cri.go:89] found id: ""
	I0717 18:42:20.906065   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.906075   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:20.906083   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:20.906142   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:20.942292   80857 cri.go:89] found id: ""
	I0717 18:42:20.942316   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.942335   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:20.942342   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:20.942403   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:20.985113   80857 cri.go:89] found id: ""
	I0717 18:42:20.985143   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.985151   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:20.985157   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:20.985217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:21.021807   80857 cri.go:89] found id: ""
	I0717 18:42:21.021834   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.021842   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:21.021847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:21.021906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:21.061924   80857 cri.go:89] found id: ""
	I0717 18:42:21.061949   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.061961   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:21.061969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:21.062025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:21.098890   80857 cri.go:89] found id: ""
	I0717 18:42:21.098916   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.098927   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:21.098935   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:21.098991   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:21.132576   80857 cri.go:89] found id: ""
	I0717 18:42:21.132612   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.132621   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:21.132627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:21.132687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:21.167723   80857 cri.go:89] found id: ""
	I0717 18:42:21.167765   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.167778   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:21.167788   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:21.167803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:21.220427   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:21.220461   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:21.233191   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:21.233216   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:21.304462   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:21.304481   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:21.304498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:21.386887   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:21.386925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:23.926518   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:23.940470   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:23.940534   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:23.976739   80857 cri.go:89] found id: ""
	I0717 18:42:23.976763   80857 logs.go:276] 0 containers: []
	W0717 18:42:23.976773   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:23.976778   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:23.976838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:24.007575   80857 cri.go:89] found id: ""
	I0717 18:42:24.007603   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.007612   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:24.007617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:24.007671   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:24.040430   80857 cri.go:89] found id: ""
	I0717 18:42:24.040455   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.040463   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:24.040468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:24.040581   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:24.071602   80857 cri.go:89] found id: ""
	I0717 18:42:24.071629   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.071638   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:24.071644   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:24.071705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:24.109570   80857 cri.go:89] found id: ""
	I0717 18:42:24.109595   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.109602   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:24.109607   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:24.109667   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:24.144284   80857 cri.go:89] found id: ""
	I0717 18:42:24.144305   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.144328   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:24.144333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:24.144382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:24.179441   80857 cri.go:89] found id: ""
	I0717 18:42:24.179467   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.179474   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:24.179479   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:24.179545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:24.222100   80857 cri.go:89] found id: ""
	I0717 18:42:24.222133   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.222143   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:24.222159   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:24.222175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:24.273181   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:24.273215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:24.285835   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:24.285861   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:24.357804   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:24.357826   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:24.357839   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:24.437270   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:24.437310   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:24.670033   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.671014   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:27.325296   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.597795   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.098055   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.979543   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:26.992443   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:26.992497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:27.025520   80857 cri.go:89] found id: ""
	I0717 18:42:27.025548   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.025560   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:27.025567   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:27.025630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:27.059971   80857 cri.go:89] found id: ""
	I0717 18:42:27.060002   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.060011   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:27.060016   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:27.060068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:27.091370   80857 cri.go:89] found id: ""
	I0717 18:42:27.091397   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.091407   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:27.091415   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:27.091468   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:27.123736   80857 cri.go:89] found id: ""
	I0717 18:42:27.123768   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.123779   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:27.123786   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:27.123849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:27.156155   80857 cri.go:89] found id: ""
	I0717 18:42:27.156177   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.156185   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:27.156190   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:27.156239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:27.190701   80857 cri.go:89] found id: ""
	I0717 18:42:27.190729   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.190741   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:27.190749   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:27.190825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:27.222093   80857 cri.go:89] found id: ""
	I0717 18:42:27.222119   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.222130   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:27.222137   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:27.222199   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:27.258789   80857 cri.go:89] found id: ""
	I0717 18:42:27.258813   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.258824   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:27.258834   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:27.258848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:27.307033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:27.307068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:27.321181   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:27.321209   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:27.390560   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:27.390593   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:27.390613   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:27.464352   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:27.464389   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:30.005732   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:30.019088   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:30.019160   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:29.170578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.670221   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.327610   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.824292   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.824392   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.595937   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.597622   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:30.052733   80857 cri.go:89] found id: ""
	I0717 18:42:30.052757   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.052765   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:30.052775   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:30.052836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:30.087683   80857 cri.go:89] found id: ""
	I0717 18:42:30.087711   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.087722   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:30.087729   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:30.087774   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:30.124371   80857 cri.go:89] found id: ""
	I0717 18:42:30.124404   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.124416   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:30.124432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:30.124487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:30.160081   80857 cri.go:89] found id: ""
	I0717 18:42:30.160107   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.160115   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:30.160122   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:30.160173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:30.194420   80857 cri.go:89] found id: ""
	I0717 18:42:30.194447   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.194456   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:30.194464   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:30.194522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:30.229544   80857 cri.go:89] found id: ""
	I0717 18:42:30.229570   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.229584   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:30.229591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:30.229650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:30.264164   80857 cri.go:89] found id: ""
	I0717 18:42:30.264193   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.264204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:30.264211   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:30.264266   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:30.296958   80857 cri.go:89] found id: ""
	I0717 18:42:30.296986   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.296996   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:30.297008   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:30.297049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:30.348116   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:30.348145   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:30.361373   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:30.361401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:30.429601   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:30.429620   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:30.429634   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:30.507718   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:30.507752   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:33.045539   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:33.058149   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:33.058219   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:33.088675   80857 cri.go:89] found id: ""
	I0717 18:42:33.088702   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.088710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:33.088717   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:33.088773   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:33.121269   80857 cri.go:89] found id: ""
	I0717 18:42:33.121297   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.121308   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:33.121315   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:33.121375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:33.156144   80857 cri.go:89] found id: ""
	I0717 18:42:33.156173   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.156184   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:33.156192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:33.156257   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:33.188559   80857 cri.go:89] found id: ""
	I0717 18:42:33.188585   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.188597   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:33.188603   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:33.188651   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:33.219650   80857 cri.go:89] found id: ""
	I0717 18:42:33.219672   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.219680   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:33.219686   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:33.219746   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:33.249704   80857 cri.go:89] found id: ""
	I0717 18:42:33.249728   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.249737   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:33.249742   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:33.249793   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:33.283480   80857 cri.go:89] found id: ""
	I0717 18:42:33.283503   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.283511   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:33.283516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:33.283560   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:33.314577   80857 cri.go:89] found id: ""
	I0717 18:42:33.314620   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.314629   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:33.314638   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:33.314649   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:33.363458   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:33.363491   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:33.377240   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:33.377267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:33.442939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:33.442961   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:33.442976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:33.522422   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:33.522456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:34.170638   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.171034   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.324780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.824832   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.097788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.596054   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.063823   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:36.078272   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:36.078342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:36.111460   80857 cri.go:89] found id: ""
	I0717 18:42:36.111494   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.111502   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:36.111509   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:36.111562   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:36.144191   80857 cri.go:89] found id: ""
	I0717 18:42:36.144222   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.144232   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:36.144239   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:36.144306   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:36.177247   80857 cri.go:89] found id: ""
	I0717 18:42:36.177277   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.177288   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:36.177294   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:36.177350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:36.213390   80857 cri.go:89] found id: ""
	I0717 18:42:36.213419   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.213427   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:36.213433   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:36.213493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:36.246775   80857 cri.go:89] found id: ""
	I0717 18:42:36.246799   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.246807   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:36.246812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:36.246870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:36.282441   80857 cri.go:89] found id: ""
	I0717 18:42:36.282463   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.282470   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:36.282476   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:36.282529   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:36.314178   80857 cri.go:89] found id: ""
	I0717 18:42:36.314203   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.314211   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:36.314216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:36.314265   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:36.353705   80857 cri.go:89] found id: ""
	I0717 18:42:36.353730   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.353737   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:36.353746   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:36.353758   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:36.370866   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:36.370894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:36.463660   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:36.463693   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:36.463710   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:36.540337   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:36.540371   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:36.575770   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:36.575801   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.128675   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:39.141187   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:39.141255   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:39.175960   80857 cri.go:89] found id: ""
	I0717 18:42:39.175982   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.175989   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:39.175994   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:39.176051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:39.209442   80857 cri.go:89] found id: ""
	I0717 18:42:39.209472   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.209483   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:39.209490   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:39.209552   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:39.243225   80857 cri.go:89] found id: ""
	I0717 18:42:39.243249   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.243256   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:39.243262   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:39.243309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:39.277369   80857 cri.go:89] found id: ""
	I0717 18:42:39.277396   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.277407   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:39.277414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:39.277464   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:39.310522   80857 cri.go:89] found id: ""
	I0717 18:42:39.310552   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.310563   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:39.310570   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:39.310637   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:39.344186   80857 cri.go:89] found id: ""
	I0717 18:42:39.344208   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.344216   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:39.344221   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:39.344279   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:39.375329   80857 cri.go:89] found id: ""
	I0717 18:42:39.375354   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.375366   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:39.375372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:39.375419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:39.412629   80857 cri.go:89] found id: ""
	I0717 18:42:39.412659   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.412668   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:39.412679   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:39.412696   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:39.447607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:39.447644   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.498981   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:39.499013   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:39.512380   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:39.512409   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:39.580396   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:39.580415   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:39.580428   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:38.670213   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:41.170284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.825257   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:43.324155   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.596267   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.597199   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.158145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:42.177450   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:42.177522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:42.222849   80857 cri.go:89] found id: ""
	I0717 18:42:42.222880   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.222890   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:42.222897   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:42.222954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:42.252712   80857 cri.go:89] found id: ""
	I0717 18:42:42.252742   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.252752   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:42.252757   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:42.252802   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:42.283764   80857 cri.go:89] found id: ""
	I0717 18:42:42.283789   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.283799   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:42.283806   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:42.283864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:42.317243   80857 cri.go:89] found id: ""
	I0717 18:42:42.317270   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.317281   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:42.317288   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:42.317350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:42.349972   80857 cri.go:89] found id: ""
	I0717 18:42:42.350000   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.350010   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:42.350017   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:42.350074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:42.382111   80857 cri.go:89] found id: ""
	I0717 18:42:42.382146   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.382158   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:42.382165   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:42.382223   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:42.414669   80857 cri.go:89] found id: ""
	I0717 18:42:42.414692   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.414700   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:42.414705   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:42.414765   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:42.446533   80857 cri.go:89] found id: ""
	I0717 18:42:42.446571   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.446579   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:42.446588   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:42.446603   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:42.522142   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:42.522165   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:42.522177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:42.602456   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:42.602493   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:42.642192   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:42.642221   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:42.695016   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:42.695046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:43.170955   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.670631   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.325626   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.097244   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.097783   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.208310   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:45.221821   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:45.221901   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:45.256887   80857 cri.go:89] found id: ""
	I0717 18:42:45.256914   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.256924   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:45.256930   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:45.256999   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:45.293713   80857 cri.go:89] found id: ""
	I0717 18:42:45.293735   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.293748   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:45.293753   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:45.293799   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:45.328790   80857 cri.go:89] found id: ""
	I0717 18:42:45.328815   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.328824   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:45.328833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:45.328880   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:45.364977   80857 cri.go:89] found id: ""
	I0717 18:42:45.365004   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.365014   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:45.365022   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:45.365084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:45.401131   80857 cri.go:89] found id: ""
	I0717 18:42:45.401157   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.401164   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:45.401170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:45.401217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:45.432252   80857 cri.go:89] found id: ""
	I0717 18:42:45.432279   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.432287   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:45.432293   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:45.432338   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:45.464636   80857 cri.go:89] found id: ""
	I0717 18:42:45.464659   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.464667   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:45.464674   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:45.464728   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:45.494884   80857 cri.go:89] found id: ""
	I0717 18:42:45.494913   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.494924   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:45.494935   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:45.494949   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:45.546578   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:45.546610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:45.559622   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:45.559647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:45.622094   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:45.622114   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:45.622126   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:45.699772   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:45.699814   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.241667   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:48.254205   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:48.254270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:48.293258   80857 cri.go:89] found id: ""
	I0717 18:42:48.293287   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.293298   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:48.293305   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:48.293362   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:48.328778   80857 cri.go:89] found id: ""
	I0717 18:42:48.328807   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.328818   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:48.328824   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:48.328884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:48.360230   80857 cri.go:89] found id: ""
	I0717 18:42:48.360256   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.360266   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:48.360276   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:48.360335   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:48.397770   80857 cri.go:89] found id: ""
	I0717 18:42:48.397797   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.397808   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:48.397815   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:48.397873   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:48.430912   80857 cri.go:89] found id: ""
	I0717 18:42:48.430938   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.430946   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:48.430956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:48.431015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:48.462659   80857 cri.go:89] found id: ""
	I0717 18:42:48.462688   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.462699   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:48.462706   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:48.462771   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:48.497554   80857 cri.go:89] found id: ""
	I0717 18:42:48.497584   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.497594   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:48.497601   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:48.497665   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:48.529524   80857 cri.go:89] found id: ""
	I0717 18:42:48.529547   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.529555   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:48.529564   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:48.529577   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:48.601265   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:48.601285   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:48.601297   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:48.678045   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:48.678075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.718565   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:48.718598   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:48.769923   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:48.769956   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:48.169777   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.669643   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.670334   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.324997   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.824163   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:49.596927   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.097602   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:51.282887   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:51.295778   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:51.295848   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:51.329324   80857 cri.go:89] found id: ""
	I0717 18:42:51.329351   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.329361   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:51.329369   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:51.329434   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:51.362013   80857 cri.go:89] found id: ""
	I0717 18:42:51.362042   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.362052   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:51.362059   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:51.362120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:51.395039   80857 cri.go:89] found id: ""
	I0717 18:42:51.395069   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.395080   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:51.395087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:51.395155   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:51.427683   80857 cri.go:89] found id: ""
	I0717 18:42:51.427709   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.427717   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:51.427722   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:51.427772   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:51.461683   80857 cri.go:89] found id: ""
	I0717 18:42:51.461706   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.461718   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:51.461723   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:51.461769   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:51.495780   80857 cri.go:89] found id: ""
	I0717 18:42:51.495802   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.495810   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:51.495816   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:51.495867   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:51.527541   80857 cri.go:89] found id: ""
	I0717 18:42:51.527573   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.527583   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:51.527591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:51.527648   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:51.567947   80857 cri.go:89] found id: ""
	I0717 18:42:51.567975   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.567987   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:51.567997   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:51.568014   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:51.620083   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:51.620109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:51.632823   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:51.632848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:51.705731   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:51.705753   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:51.705767   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:51.781969   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:51.782005   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.318011   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:54.331886   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:54.331942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:54.362935   80857 cri.go:89] found id: ""
	I0717 18:42:54.362962   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.362972   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:54.362979   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:54.363032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:54.396153   80857 cri.go:89] found id: ""
	I0717 18:42:54.396180   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.396191   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:54.396198   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:54.396259   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:54.433123   80857 cri.go:89] found id: ""
	I0717 18:42:54.433150   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.433160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:54.433168   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:54.433224   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:54.465034   80857 cri.go:89] found id: ""
	I0717 18:42:54.465064   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.465079   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:54.465087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:54.465200   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:54.496200   80857 cri.go:89] found id: ""
	I0717 18:42:54.496250   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.496263   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:54.496271   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:54.496332   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:54.528618   80857 cri.go:89] found id: ""
	I0717 18:42:54.528646   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.528656   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:54.528664   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:54.528724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:54.563018   80857 cri.go:89] found id: ""
	I0717 18:42:54.563042   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.563052   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:54.563059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:54.563114   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:54.595221   80857 cri.go:89] found id: ""
	I0717 18:42:54.595256   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.595266   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:54.595275   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:54.595291   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:54.608193   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:54.608220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:54.673755   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:54.673778   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:54.673793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:54.756443   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:54.756483   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.792670   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:54.792700   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:55.169224   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.169851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.824614   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.324611   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.596824   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:56.597638   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.096992   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.344637   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:57.357003   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:57.357068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:57.389230   80857 cri.go:89] found id: ""
	I0717 18:42:57.389261   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.389271   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:57.389278   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:57.389372   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:57.421529   80857 cri.go:89] found id: ""
	I0717 18:42:57.421553   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.421571   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:57.421578   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:57.421642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:57.455154   80857 cri.go:89] found id: ""
	I0717 18:42:57.455186   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.455193   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:57.455199   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:57.455245   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:57.490576   80857 cri.go:89] found id: ""
	I0717 18:42:57.490608   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.490621   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:57.490630   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:57.490693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:57.523972   80857 cri.go:89] found id: ""
	I0717 18:42:57.524010   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.524023   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:57.524033   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:57.524092   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:57.558106   80857 cri.go:89] found id: ""
	I0717 18:42:57.558132   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.558140   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:57.558145   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:57.558201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:57.591009   80857 cri.go:89] found id: ""
	I0717 18:42:57.591035   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.591045   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:57.591051   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:57.591110   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:57.624564   80857 cri.go:89] found id: ""
	I0717 18:42:57.624592   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.624601   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:57.624612   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:57.624627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:57.699833   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:57.699868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:57.737029   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:57.737066   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:57.790562   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:57.790605   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:57.804935   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:57.804984   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:57.873081   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:59.170203   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.170348   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.325020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.825020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.596885   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.597698   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:00.374166   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:00.388370   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:00.388443   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:00.421228   80857 cri.go:89] found id: ""
	I0717 18:43:00.421257   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.421268   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:00.421276   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:00.421325   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:00.451819   80857 cri.go:89] found id: ""
	I0717 18:43:00.451846   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.451856   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:00.451862   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:00.451917   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:00.482960   80857 cri.go:89] found id: ""
	I0717 18:43:00.482993   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.483004   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:00.483015   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:00.483074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:00.515860   80857 cri.go:89] found id: ""
	I0717 18:43:00.515882   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.515892   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:00.515899   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:00.515954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:00.548177   80857 cri.go:89] found id: ""
	I0717 18:43:00.548202   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.548212   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:00.548217   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:00.548275   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:00.580759   80857 cri.go:89] found id: ""
	I0717 18:43:00.580782   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.580790   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:00.580795   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:00.580847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:00.618661   80857 cri.go:89] found id: ""
	I0717 18:43:00.618683   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.618691   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:00.618699   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:00.618742   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:00.650503   80857 cri.go:89] found id: ""
	I0717 18:43:00.650528   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.650535   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:00.650544   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:00.650555   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:00.699668   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:00.699697   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:00.714086   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:00.714114   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:00.777051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:00.777087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:00.777105   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:00.859238   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:00.859274   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.399050   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:03.412565   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:03.412626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:03.445993   80857 cri.go:89] found id: ""
	I0717 18:43:03.446026   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.446038   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:03.446045   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:03.446101   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:03.481251   80857 cri.go:89] found id: ""
	I0717 18:43:03.481285   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.481297   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:03.481305   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:03.481371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:03.514406   80857 cri.go:89] found id: ""
	I0717 18:43:03.514433   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.514441   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:03.514447   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:03.514497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:03.546217   80857 cri.go:89] found id: ""
	I0717 18:43:03.546248   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.546258   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:03.546266   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:03.546327   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:03.577287   80857 cri.go:89] found id: ""
	I0717 18:43:03.577318   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.577333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:03.577340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:03.577394   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:03.610080   80857 cri.go:89] found id: ""
	I0717 18:43:03.610101   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.610109   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:03.610114   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:03.610159   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:03.643753   80857 cri.go:89] found id: ""
	I0717 18:43:03.643777   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.643787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:03.643792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:03.643849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:03.676290   80857 cri.go:89] found id: ""
	I0717 18:43:03.676338   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.676345   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:03.676353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:03.676364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:03.727818   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:03.727850   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:03.740752   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:03.740784   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:03.810465   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:03.810485   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:03.810499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:03.889326   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:03.889359   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.170473   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:05.170754   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:07.172145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.323855   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.325019   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.096213   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.096443   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.426949   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:06.440007   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:06.440079   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:06.471689   80857 cri.go:89] found id: ""
	I0717 18:43:06.471715   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.471724   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:06.471729   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:06.471775   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:06.503818   80857 cri.go:89] found id: ""
	I0717 18:43:06.503840   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.503847   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:06.503853   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:06.503900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:06.534733   80857 cri.go:89] found id: ""
	I0717 18:43:06.534755   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.534763   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:06.534768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:06.534818   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:06.565388   80857 cri.go:89] found id: ""
	I0717 18:43:06.565414   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.565421   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:06.565431   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:06.565480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:06.597739   80857 cri.go:89] found id: ""
	I0717 18:43:06.597764   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.597775   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:06.597782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:06.597847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:06.629823   80857 cri.go:89] found id: ""
	I0717 18:43:06.629845   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.629853   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:06.629859   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:06.629921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:06.663753   80857 cri.go:89] found id: ""
	I0717 18:43:06.663779   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.663787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:06.663792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:06.663838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:06.700868   80857 cri.go:89] found id: ""
	I0717 18:43:06.700896   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.700906   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:06.700917   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:06.700932   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:06.753064   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:06.753097   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:06.765845   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:06.765868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:06.834691   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:06.834715   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:06.834729   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:06.908650   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:06.908682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.450804   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:09.463369   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:09.463452   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:09.506992   80857 cri.go:89] found id: ""
	I0717 18:43:09.507020   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.507028   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:09.507035   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:09.507093   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:09.543083   80857 cri.go:89] found id: ""
	I0717 18:43:09.543108   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.543116   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:09.543121   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:09.543174   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:09.576194   80857 cri.go:89] found id: ""
	I0717 18:43:09.576219   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.576226   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:09.576231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:09.576289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:09.610148   80857 cri.go:89] found id: ""
	I0717 18:43:09.610171   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.610178   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:09.610184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:09.610258   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:09.642217   80857 cri.go:89] found id: ""
	I0717 18:43:09.642246   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.642255   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:09.642263   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:09.642342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:09.678041   80857 cri.go:89] found id: ""
	I0717 18:43:09.678064   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.678073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:09.678079   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:09.678141   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:09.711162   80857 cri.go:89] found id: ""
	I0717 18:43:09.711193   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.711204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:09.711212   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:09.711272   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:09.746135   80857 cri.go:89] found id: ""
	I0717 18:43:09.746164   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.746175   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:09.746186   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:09.746197   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:09.799268   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:09.799303   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:09.811910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:09.811935   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:09.876939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:09.876982   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:09.876998   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:09.951468   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:09.951502   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.671086   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.170273   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.823628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.824485   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.597216   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:13.096347   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.488926   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:12.501054   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:12.501112   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:12.532536   80857 cri.go:89] found id: ""
	I0717 18:43:12.532569   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.532577   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:12.532582   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:12.532629   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:12.565102   80857 cri.go:89] found id: ""
	I0717 18:43:12.565130   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.565141   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:12.565148   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:12.565208   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:12.600262   80857 cri.go:89] found id: ""
	I0717 18:43:12.600299   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.600309   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:12.600316   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:12.600366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:12.633950   80857 cri.go:89] found id: ""
	I0717 18:43:12.633980   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.633991   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:12.633998   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:12.634054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:12.673297   80857 cri.go:89] found id: ""
	I0717 18:43:12.673325   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.673338   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:12.673345   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:12.673406   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:12.707112   80857 cri.go:89] found id: ""
	I0717 18:43:12.707136   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.707144   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:12.707150   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:12.707206   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:12.746323   80857 cri.go:89] found id: ""
	I0717 18:43:12.746348   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.746358   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:12.746372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:12.746433   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:12.779470   80857 cri.go:89] found id: ""
	I0717 18:43:12.779496   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.779507   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:12.779518   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:12.779534   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:12.830156   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:12.830178   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:12.843707   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:12.843734   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:12.911849   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:12.911875   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:12.911891   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:12.986090   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:12.986122   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:14.170350   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:16.670284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:14.824727   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.324146   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.096736   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.596689   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.523428   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:15.536012   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:15.536070   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:15.569179   80857 cri.go:89] found id: ""
	I0717 18:43:15.569208   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.569218   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:15.569225   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:15.569273   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:15.606727   80857 cri.go:89] found id: ""
	I0717 18:43:15.606749   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.606757   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:15.606763   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:15.606805   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:15.638842   80857 cri.go:89] found id: ""
	I0717 18:43:15.638873   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.638883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:15.638889   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:15.638939   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:15.671418   80857 cri.go:89] found id: ""
	I0717 18:43:15.671444   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.671453   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:15.671459   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:15.671517   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:15.704892   80857 cri.go:89] found id: ""
	I0717 18:43:15.704928   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.704937   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:15.704956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:15.705013   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:15.738478   80857 cri.go:89] found id: ""
	I0717 18:43:15.738502   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.738509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:15.738515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:15.738584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:15.771188   80857 cri.go:89] found id: ""
	I0717 18:43:15.771225   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.771237   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:15.771245   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:15.771303   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:15.807737   80857 cri.go:89] found id: ""
	I0717 18:43:15.807763   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.807770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:15.807779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:15.807790   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:15.861202   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:15.861234   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:15.874170   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:15.874200   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:15.938049   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:15.938073   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:15.938086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:16.025420   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:16.025456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:18.563320   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:18.575574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:18.575634   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:18.608673   80857 cri.go:89] found id: ""
	I0717 18:43:18.608700   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.608710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:18.608718   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:18.608782   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:18.641589   80857 cri.go:89] found id: ""
	I0717 18:43:18.641611   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.641618   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:18.641624   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:18.641679   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:18.672232   80857 cri.go:89] found id: ""
	I0717 18:43:18.672258   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.672268   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:18.672274   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:18.672331   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:18.706088   80857 cri.go:89] found id: ""
	I0717 18:43:18.706111   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.706118   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:18.706134   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:18.706179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:18.742475   80857 cri.go:89] found id: ""
	I0717 18:43:18.742503   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.742512   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:18.742518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:18.742575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:18.774141   80857 cri.go:89] found id: ""
	I0717 18:43:18.774169   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.774178   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:18.774183   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:18.774234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:18.806648   80857 cri.go:89] found id: ""
	I0717 18:43:18.806672   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.806679   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:18.806685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:18.806731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:18.838022   80857 cri.go:89] found id: ""
	I0717 18:43:18.838047   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.838054   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:18.838062   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:18.838076   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:18.903467   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:18.903487   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:18.903498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:18.980385   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:18.980432   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:19.020884   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:19.020914   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:19.073530   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:19.073574   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:19.169841   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.172793   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:19.824764   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.826081   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:20.095275   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:22.097120   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.587870   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:21.602130   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:21.602185   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:21.635373   80857 cri.go:89] found id: ""
	I0717 18:43:21.635401   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.635411   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:21.635418   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:21.635480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:21.667175   80857 cri.go:89] found id: ""
	I0717 18:43:21.667200   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.667209   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:21.667216   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:21.667267   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:21.705876   80857 cri.go:89] found id: ""
	I0717 18:43:21.705907   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.705918   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:21.705926   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:21.705988   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:21.753302   80857 cri.go:89] found id: ""
	I0717 18:43:21.753323   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.753330   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:21.753337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:21.753388   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:21.785363   80857 cri.go:89] found id: ""
	I0717 18:43:21.785390   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.785396   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:21.785402   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:21.785448   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:21.817517   80857 cri.go:89] found id: ""
	I0717 18:43:21.817545   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.817553   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:21.817560   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:21.817615   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:21.849451   80857 cri.go:89] found id: ""
	I0717 18:43:21.849478   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.849489   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:21.849497   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:21.849553   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:21.880032   80857 cri.go:89] found id: ""
	I0717 18:43:21.880055   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.880063   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:21.880073   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:21.880086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:21.928498   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:21.928530   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:21.941532   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:21.941565   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:22.014044   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:22.014066   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:22.014081   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:22.090789   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:22.090817   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:24.628401   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:24.643571   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:24.643642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:24.679262   80857 cri.go:89] found id: ""
	I0717 18:43:24.679288   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.679297   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:24.679303   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:24.679360   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:24.713043   80857 cri.go:89] found id: ""
	I0717 18:43:24.713073   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.713085   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:24.713092   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:24.713145   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:24.751459   80857 cri.go:89] found id: ""
	I0717 18:43:24.751496   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.751508   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:24.751518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:24.751584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:24.790793   80857 cri.go:89] found id: ""
	I0717 18:43:24.790820   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.790831   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:24.790838   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:24.790895   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:24.822909   80857 cri.go:89] found id: ""
	I0717 18:43:24.822936   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.822945   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:24.822953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:24.823016   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:24.855369   80857 cri.go:89] found id: ""
	I0717 18:43:24.855418   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.855455   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:24.855468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:24.855557   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:24.891080   80857 cri.go:89] found id: ""
	I0717 18:43:24.891110   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.891127   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:24.891133   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:24.891187   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:24.923679   80857 cri.go:89] found id: ""
	I0717 18:43:24.923812   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.923833   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:24.923847   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:24.923863   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:24.975469   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:24.975499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:24.988671   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:24.988702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:43:23.670616   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.171013   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.323858   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.324395   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:28.325125   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.596495   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.597134   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:29.096334   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	W0717 18:43:25.055191   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:25.055210   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:25.055223   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:25.138867   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:25.138900   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:27.678822   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:27.691422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:27.691483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:27.723979   80857 cri.go:89] found id: ""
	I0717 18:43:27.724008   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.724016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:27.724022   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:27.724067   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:27.756389   80857 cri.go:89] found id: ""
	I0717 18:43:27.756415   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.756423   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:27.756429   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:27.756476   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:27.787617   80857 cri.go:89] found id: ""
	I0717 18:43:27.787644   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.787652   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:27.787658   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:27.787705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:27.821688   80857 cri.go:89] found id: ""
	I0717 18:43:27.821716   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.821725   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:27.821732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:27.821787   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:27.855353   80857 cri.go:89] found id: ""
	I0717 18:43:27.855378   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.855386   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:27.855392   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:27.855439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:27.887885   80857 cri.go:89] found id: ""
	I0717 18:43:27.887909   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.887917   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:27.887923   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:27.887984   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:27.918797   80857 cri.go:89] found id: ""
	I0717 18:43:27.918820   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.918828   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:27.918833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:27.918884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:27.951255   80857 cri.go:89] found id: ""
	I0717 18:43:27.951283   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.951295   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:27.951306   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:27.951319   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:28.025476   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:28.025506   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:28.063994   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:28.064020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:28.117762   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:28.117805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:28.135688   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:28.135725   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:28.238770   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:28.172438   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.670703   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:32.674896   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.824443   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.324216   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:31.595533   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.597968   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.739930   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:30.754147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:30.754231   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:30.794454   80857 cri.go:89] found id: ""
	I0717 18:43:30.794479   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.794486   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:30.794491   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:30.794548   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:30.831643   80857 cri.go:89] found id: ""
	I0717 18:43:30.831666   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.831673   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:30.831678   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:30.831731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:30.863293   80857 cri.go:89] found id: ""
	I0717 18:43:30.863315   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.863323   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:30.863337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:30.863395   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:30.897830   80857 cri.go:89] found id: ""
	I0717 18:43:30.897859   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.897870   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:30.897877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:30.897929   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:30.933179   80857 cri.go:89] found id: ""
	I0717 18:43:30.933209   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.933220   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:30.933227   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:30.933289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:30.964730   80857 cri.go:89] found id: ""
	I0717 18:43:30.964759   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.964773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:30.964781   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:30.964825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:30.996330   80857 cri.go:89] found id: ""
	I0717 18:43:30.996353   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.996361   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:30.996367   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:30.996419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:31.028193   80857 cri.go:89] found id: ""
	I0717 18:43:31.028220   80857 logs.go:276] 0 containers: []
	W0717 18:43:31.028228   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:31.028237   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:31.028251   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:31.040465   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:31.040490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:31.108127   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:31.108150   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:31.108164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:31.187763   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:31.187797   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:31.224238   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:31.224266   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:33.776145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:33.790045   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:33.790108   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:33.823471   80857 cri.go:89] found id: ""
	I0717 18:43:33.823495   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.823505   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:33.823512   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:33.823568   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:33.860205   80857 cri.go:89] found id: ""
	I0717 18:43:33.860233   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.860243   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:33.860250   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:33.860298   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:33.895469   80857 cri.go:89] found id: ""
	I0717 18:43:33.895499   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.895509   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:33.895516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:33.895578   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:33.938483   80857 cri.go:89] found id: ""
	I0717 18:43:33.938517   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.938527   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:33.938534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:33.938596   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:33.973265   80857 cri.go:89] found id: ""
	I0717 18:43:33.973293   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.973303   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:33.973309   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:33.973382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:34.012669   80857 cri.go:89] found id: ""
	I0717 18:43:34.012696   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.012704   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:34.012710   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:34.012760   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:34.045522   80857 cri.go:89] found id: ""
	I0717 18:43:34.045547   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.045557   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:34.045564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:34.045636   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:34.082927   80857 cri.go:89] found id: ""
	I0717 18:43:34.082957   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.082968   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:34.082979   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:34.082993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:34.134133   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:34.134168   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:34.146814   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:34.146837   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:34.217050   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:34.217079   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:34.217094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:34.298572   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:34.298610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:35.169868   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.170083   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:35.324578   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.825006   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:36.096437   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:38.096991   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:36.838187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:36.850888   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:36.850948   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:36.883132   80857 cri.go:89] found id: ""
	I0717 18:43:36.883153   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.883160   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:36.883166   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:36.883209   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:36.918310   80857 cri.go:89] found id: ""
	I0717 18:43:36.918339   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.918348   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:36.918353   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:36.918411   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:36.949794   80857 cri.go:89] found id: ""
	I0717 18:43:36.949818   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.949825   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:36.949831   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:36.949889   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:36.980913   80857 cri.go:89] found id: ""
	I0717 18:43:36.980951   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.980962   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:36.980969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:36.981029   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:37.014295   80857 cri.go:89] found id: ""
	I0717 18:43:37.014322   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.014330   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:37.014336   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:37.014397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:37.048555   80857 cri.go:89] found id: ""
	I0717 18:43:37.048581   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.048589   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:37.048595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:37.048643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:37.080533   80857 cri.go:89] found id: ""
	I0717 18:43:37.080561   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.080571   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:37.080577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:37.080640   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:37.112919   80857 cri.go:89] found id: ""
	I0717 18:43:37.112952   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.112963   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:37.112973   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:37.112987   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:37.165012   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:37.165044   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:37.177860   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:37.177881   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:37.244776   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:37.244806   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:37.244824   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:37.322949   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:37.322976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:39.861056   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:39.884509   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:39.884592   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:39.931317   80857 cri.go:89] found id: ""
	I0717 18:43:39.931341   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.931348   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:39.931354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:39.931410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:39.971571   80857 cri.go:89] found id: ""
	I0717 18:43:39.971615   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.971626   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:39.971634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:39.971692   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:40.003851   80857 cri.go:89] found id: ""
	I0717 18:43:40.003875   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.003883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:40.003891   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:40.003942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:40.040403   80857 cri.go:89] found id: ""
	I0717 18:43:40.040430   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.040440   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:40.040445   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:40.040498   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:39.669960   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.170056   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.325792   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.824332   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.596935   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.597153   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.071893   80857 cri.go:89] found id: ""
	I0717 18:43:40.071919   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.071927   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:40.071932   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:40.071979   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:40.111020   80857 cri.go:89] found id: ""
	I0717 18:43:40.111042   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.111052   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:40.111059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:40.111117   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:40.142872   80857 cri.go:89] found id: ""
	I0717 18:43:40.142899   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.142910   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:40.142917   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:40.142975   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:40.179919   80857 cri.go:89] found id: ""
	I0717 18:43:40.179944   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.179953   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:40.179963   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:40.179980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:40.233033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:40.233075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:40.246272   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:40.246299   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:40.311988   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:40.312014   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:40.312033   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:40.395622   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:40.395658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:42.935843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:42.949893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:42.949957   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:42.982429   80857 cri.go:89] found id: ""
	I0717 18:43:42.982451   80857 logs.go:276] 0 containers: []
	W0717 18:43:42.982459   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:42.982464   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:42.982512   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:43.018637   80857 cri.go:89] found id: ""
	I0717 18:43:43.018659   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.018666   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:43.018672   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:43.018719   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:43.054274   80857 cri.go:89] found id: ""
	I0717 18:43:43.054301   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.054310   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:43.054317   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:43.054368   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:43.093382   80857 cri.go:89] found id: ""
	I0717 18:43:43.093408   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.093418   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:43.093425   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:43.093484   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:43.125830   80857 cri.go:89] found id: ""
	I0717 18:43:43.125862   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.125871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:43.125878   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:43.125936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:43.157110   80857 cri.go:89] found id: ""
	I0717 18:43:43.157138   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.157147   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:43.157154   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:43.157215   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:43.188320   80857 cri.go:89] found id: ""
	I0717 18:43:43.188342   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.188349   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:43.188354   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:43.188400   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:43.220650   80857 cri.go:89] found id: ""
	I0717 18:43:43.220679   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.220686   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:43.220695   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:43.220707   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:43.259320   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:43.259358   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:43.308308   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:43.308346   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:43.321865   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:43.321894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:43.396110   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:43.396135   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:43.396147   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:44.670206   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.169748   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.323427   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.324066   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.096564   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.105605   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.976091   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:45.988956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:45.989015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:46.022277   80857 cri.go:89] found id: ""
	I0717 18:43:46.022307   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.022318   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:46.022325   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:46.022398   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:46.057607   80857 cri.go:89] found id: ""
	I0717 18:43:46.057636   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.057646   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:46.057653   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:46.057712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:46.089275   80857 cri.go:89] found id: ""
	I0717 18:43:46.089304   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.089313   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:46.089321   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:46.089378   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:46.123686   80857 cri.go:89] found id: ""
	I0717 18:43:46.123717   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.123726   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:46.123731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:46.123784   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:46.166600   80857 cri.go:89] found id: ""
	I0717 18:43:46.166628   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.166638   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:46.166645   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:46.166704   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:46.202518   80857 cri.go:89] found id: ""
	I0717 18:43:46.202543   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.202562   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:46.202568   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:46.202612   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:46.234573   80857 cri.go:89] found id: ""
	I0717 18:43:46.234608   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.234620   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:46.234627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:46.234687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:46.265305   80857 cri.go:89] found id: ""
	I0717 18:43:46.265333   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.265343   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:46.265355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:46.265369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:46.342963   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:46.342993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:46.377170   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:46.377208   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:46.429641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:46.429673   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:46.442168   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:46.442195   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:46.516656   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.016877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:49.030308   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:49.030375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:49.062400   80857 cri.go:89] found id: ""
	I0717 18:43:49.062423   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.062430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:49.062435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:49.062486   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:49.097110   80857 cri.go:89] found id: ""
	I0717 18:43:49.097131   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.097137   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:49.097142   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:49.097190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:49.128535   80857 cri.go:89] found id: ""
	I0717 18:43:49.128558   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.128571   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:49.128577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:49.128626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:49.162505   80857 cri.go:89] found id: ""
	I0717 18:43:49.162530   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.162538   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:49.162544   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:49.162594   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:49.194912   80857 cri.go:89] found id: ""
	I0717 18:43:49.194939   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.194950   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:49.194957   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:49.195025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:49.227055   80857 cri.go:89] found id: ""
	I0717 18:43:49.227083   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.227092   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:49.227098   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:49.227147   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:49.259568   80857 cri.go:89] found id: ""
	I0717 18:43:49.259596   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.259607   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:49.259618   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:49.259673   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:49.291700   80857 cri.go:89] found id: ""
	I0717 18:43:49.291727   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.291735   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:49.291744   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:49.291755   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:49.344600   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:49.344636   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:49.357680   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:49.357705   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:49.427160   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.427180   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:49.427192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:49.504151   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:49.504182   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:49.170632   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.170953   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.324205   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.823181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:53.824989   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.596298   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.596383   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:54.097260   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:52.041591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:52.054775   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:52.054841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:52.085858   80857 cri.go:89] found id: ""
	I0717 18:43:52.085892   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.085904   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:52.085911   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:52.085961   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:52.124100   80857 cri.go:89] found id: ""
	I0717 18:43:52.124122   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.124130   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:52.124135   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:52.124195   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:52.155056   80857 cri.go:89] found id: ""
	I0717 18:43:52.155079   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.155087   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:52.155093   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:52.155154   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:52.189318   80857 cri.go:89] found id: ""
	I0717 18:43:52.189349   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.189359   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:52.189366   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:52.189430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:52.222960   80857 cri.go:89] found id: ""
	I0717 18:43:52.222988   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.222999   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:52.223006   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:52.223071   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:52.255807   80857 cri.go:89] found id: ""
	I0717 18:43:52.255834   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.255841   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:52.255847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:52.255904   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:52.286596   80857 cri.go:89] found id: ""
	I0717 18:43:52.286628   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.286641   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:52.286648   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:52.286703   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:52.319607   80857 cri.go:89] found id: ""
	I0717 18:43:52.319632   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.319641   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:52.319652   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:52.319666   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:52.371270   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:52.371301   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:52.384771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:52.384803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:52.456408   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:52.456432   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:52.456444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:52.533724   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:52.533759   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:53.171080   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.669642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.324311   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.823693   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.595916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.597526   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.072554   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:55.087005   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:55.087086   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:55.123300   80857 cri.go:89] found id: ""
	I0717 18:43:55.123325   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.123331   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:55.123336   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:55.123390   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:55.158476   80857 cri.go:89] found id: ""
	I0717 18:43:55.158502   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.158509   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:55.158515   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:55.158572   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:55.198489   80857 cri.go:89] found id: ""
	I0717 18:43:55.198511   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.198518   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:55.198524   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:55.198567   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:55.230901   80857 cri.go:89] found id: ""
	I0717 18:43:55.230933   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.230943   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:55.230951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:55.231028   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:55.262303   80857 cri.go:89] found id: ""
	I0717 18:43:55.262326   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.262333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:55.262340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:55.262393   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:55.293889   80857 cri.go:89] found id: ""
	I0717 18:43:55.293916   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.293925   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:55.293930   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:55.293983   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:55.325695   80857 cri.go:89] found id: ""
	I0717 18:43:55.325720   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.325727   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:55.325737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:55.325797   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:55.360021   80857 cri.go:89] found id: ""
	I0717 18:43:55.360044   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.360052   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:55.360059   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:55.360075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:55.372088   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:55.372111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:55.442073   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:55.442101   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:55.442116   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:55.521733   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:55.521763   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:55.558914   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:55.558947   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.114001   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:58.126283   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:58.126353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:58.162769   80857 cri.go:89] found id: ""
	I0717 18:43:58.162800   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.162810   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:58.162815   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:58.162862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:58.197359   80857 cri.go:89] found id: ""
	I0717 18:43:58.197386   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.197397   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:58.197404   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:58.197465   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:58.229662   80857 cri.go:89] found id: ""
	I0717 18:43:58.229691   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.229700   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:58.229707   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:58.229766   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:58.261810   80857 cri.go:89] found id: ""
	I0717 18:43:58.261832   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.261838   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:58.261844   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:58.261900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:58.293243   80857 cri.go:89] found id: ""
	I0717 18:43:58.293271   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.293282   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:58.293290   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:58.293353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:58.325689   80857 cri.go:89] found id: ""
	I0717 18:43:58.325714   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.325724   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:58.325731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:58.325785   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:58.357381   80857 cri.go:89] found id: ""
	I0717 18:43:58.357406   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.357416   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:58.357422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:58.357483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:58.389859   80857 cri.go:89] found id: ""
	I0717 18:43:58.389888   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.389900   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:58.389910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:58.389926   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:58.458034   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:58.458058   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:58.458072   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:58.536134   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:58.536164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:58.573808   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:58.573834   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.624956   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:58.624985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:58.170810   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.670184   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.671370   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.824682   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.824874   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:01.096294   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:03.096348   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:01.138486   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:01.151547   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:01.151610   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:01.186397   80857 cri.go:89] found id: ""
	I0717 18:44:01.186422   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.186430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:01.186435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:01.186487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:01.220797   80857 cri.go:89] found id: ""
	I0717 18:44:01.220822   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.220830   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:01.220849   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:01.220894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:01.257640   80857 cri.go:89] found id: ""
	I0717 18:44:01.257666   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.257674   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:01.257680   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:01.257727   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:01.295393   80857 cri.go:89] found id: ""
	I0717 18:44:01.295418   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.295425   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:01.295432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:01.295493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:01.327242   80857 cri.go:89] found id: ""
	I0717 18:44:01.327261   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.327268   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:01.327273   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:01.327319   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:01.358559   80857 cri.go:89] found id: ""
	I0717 18:44:01.358586   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.358593   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:01.358599   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:01.358647   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:01.392301   80857 cri.go:89] found id: ""
	I0717 18:44:01.392332   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.392341   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:01.392346   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:01.392407   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:01.424422   80857 cri.go:89] found id: ""
	I0717 18:44:01.424449   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.424457   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:01.424465   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:01.424477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:01.473298   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:01.473332   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:01.487444   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:01.487471   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:01.552548   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:01.552572   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:01.552586   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:01.634203   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:01.634242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:04.175618   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:04.188071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:04.188150   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:04.222149   80857 cri.go:89] found id: ""
	I0717 18:44:04.222173   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.222180   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:04.222185   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:04.222242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:04.257174   80857 cri.go:89] found id: ""
	I0717 18:44:04.257211   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.257223   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:04.257232   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:04.257284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:04.291628   80857 cri.go:89] found id: ""
	I0717 18:44:04.291653   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.291666   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:04.291673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:04.291733   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:04.325935   80857 cri.go:89] found id: ""
	I0717 18:44:04.325964   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.325975   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:04.325982   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:04.326043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:04.356610   80857 cri.go:89] found id: ""
	I0717 18:44:04.356638   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.356648   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:04.356655   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:04.356712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:04.387728   80857 cri.go:89] found id: ""
	I0717 18:44:04.387764   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.387773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:04.387782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:04.387840   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:04.421452   80857 cri.go:89] found id: ""
	I0717 18:44:04.421479   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.421488   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:04.421495   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:04.421555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:04.453111   80857 cri.go:89] found id: ""
	I0717 18:44:04.453139   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.453150   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:04.453161   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:04.453175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:04.506185   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:04.506215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:04.523611   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:04.523638   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:04.591051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:04.591074   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:04.591091   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:04.666603   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:04.666647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:05.169836   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.170112   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.324886   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.325488   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.096545   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.598131   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.205208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:07.218182   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:07.218236   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:07.254521   80857 cri.go:89] found id: ""
	I0717 18:44:07.254554   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.254565   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:07.254571   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:07.254638   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:07.293622   80857 cri.go:89] found id: ""
	I0717 18:44:07.293650   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.293658   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:07.293663   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:07.293711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:07.331056   80857 cri.go:89] found id: ""
	I0717 18:44:07.331083   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.331091   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:07.331097   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:07.331157   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:07.368445   80857 cri.go:89] found id: ""
	I0717 18:44:07.368476   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.368484   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:07.368491   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:07.368541   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:07.405507   80857 cri.go:89] found id: ""
	I0717 18:44:07.405539   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.405550   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:07.405557   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:07.405617   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:07.444752   80857 cri.go:89] found id: ""
	I0717 18:44:07.444782   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.444792   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:07.444801   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:07.444859   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:07.486976   80857 cri.go:89] found id: ""
	I0717 18:44:07.487006   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.487016   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:07.487024   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:07.487073   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:07.522561   80857 cri.go:89] found id: ""
	I0717 18:44:07.522590   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.522599   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:07.522607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:07.522618   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:07.576350   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:07.576382   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:07.591491   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:07.591517   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:07.659860   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:07.659886   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:07.659902   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:07.743445   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:07.743478   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:09.170601   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.170851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:09.824120   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.826838   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.097009   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:12.596778   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.284468   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:10.296549   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:10.296608   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:10.331209   80857 cri.go:89] found id: ""
	I0717 18:44:10.331236   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.331246   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:10.331252   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:10.331297   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:10.363911   80857 cri.go:89] found id: ""
	I0717 18:44:10.363941   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.363949   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:10.363954   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:10.364001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:10.395935   80857 cri.go:89] found id: ""
	I0717 18:44:10.395960   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.395970   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:10.395977   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:10.396021   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:10.428307   80857 cri.go:89] found id: ""
	I0717 18:44:10.428337   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.428344   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:10.428351   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:10.428397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:10.459615   80857 cri.go:89] found id: ""
	I0717 18:44:10.459643   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.459654   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:10.459661   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:10.459715   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:10.491593   80857 cri.go:89] found id: ""
	I0717 18:44:10.491617   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.491628   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:10.491636   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:10.491693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:10.526822   80857 cri.go:89] found id: ""
	I0717 18:44:10.526846   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.526853   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:10.526858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:10.526918   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:10.561037   80857 cri.go:89] found id: ""
	I0717 18:44:10.561066   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.561077   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:10.561087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:10.561101   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:10.643333   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:10.643364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:10.684673   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:10.684704   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:10.736191   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:10.736220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:10.748762   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:10.748793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:10.812121   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.313033   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:13.325692   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:13.325756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:13.358306   80857 cri.go:89] found id: ""
	I0717 18:44:13.358336   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.358345   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:13.358352   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:13.358410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:13.393233   80857 cri.go:89] found id: ""
	I0717 18:44:13.393264   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.393274   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:13.393282   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:13.393340   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:13.424256   80857 cri.go:89] found id: ""
	I0717 18:44:13.424287   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.424298   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:13.424305   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:13.424358   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:13.454988   80857 cri.go:89] found id: ""
	I0717 18:44:13.455010   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.455018   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:13.455023   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:13.455069   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:13.491019   80857 cri.go:89] found id: ""
	I0717 18:44:13.491046   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.491054   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:13.491060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:13.491107   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:13.523045   80857 cri.go:89] found id: ""
	I0717 18:44:13.523070   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.523079   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:13.523085   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:13.523131   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:13.555442   80857 cri.go:89] found id: ""
	I0717 18:44:13.555470   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.555483   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:13.555489   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:13.555549   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:13.588891   80857 cri.go:89] found id: ""
	I0717 18:44:13.588921   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.588931   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:13.588958   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:13.588973   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:13.663635   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.663659   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:13.663674   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:13.749098   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:13.749135   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:13.785489   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:13.785524   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:13.837098   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:13.837128   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:13.671215   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.671282   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.671466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:14.324573   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.826063   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.095967   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.096403   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.096478   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.350571   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:16.364398   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:16.364470   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:16.400677   80857 cri.go:89] found id: ""
	I0717 18:44:16.400708   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.400719   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:16.400726   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:16.400781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:16.431715   80857 cri.go:89] found id: ""
	I0717 18:44:16.431743   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.431754   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:16.431760   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:16.431836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:16.465115   80857 cri.go:89] found id: ""
	I0717 18:44:16.465148   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.465160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:16.465167   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:16.465230   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:16.497906   80857 cri.go:89] found id: ""
	I0717 18:44:16.497933   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.497944   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:16.497952   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:16.498008   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:16.534066   80857 cri.go:89] found id: ""
	I0717 18:44:16.534097   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.534108   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:16.534116   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:16.534173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:16.566679   80857 cri.go:89] found id: ""
	I0717 18:44:16.566706   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.566717   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:16.566724   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:16.566781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:16.598397   80857 cri.go:89] found id: ""
	I0717 18:44:16.598416   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.598422   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:16.598427   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:16.598480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:16.629943   80857 cri.go:89] found id: ""
	I0717 18:44:16.629975   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.629998   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:16.630017   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:16.630032   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:16.706452   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:16.706489   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:16.744971   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:16.745003   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:16.796450   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:16.796477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:16.809192   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:16.809217   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:16.875699   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.376821   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:19.389921   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:19.389980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:19.423837   80857 cri.go:89] found id: ""
	I0717 18:44:19.423862   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.423870   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:19.423877   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:19.423934   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:19.468267   80857 cri.go:89] found id: ""
	I0717 18:44:19.468293   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.468305   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:19.468311   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:19.468371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:19.503286   80857 cri.go:89] found id: ""
	I0717 18:44:19.503315   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.503326   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:19.503333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:19.503391   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:19.535505   80857 cri.go:89] found id: ""
	I0717 18:44:19.535531   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.535542   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:19.535548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:19.535607   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:19.568678   80857 cri.go:89] found id: ""
	I0717 18:44:19.568704   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.568711   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:19.568717   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:19.568762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:19.604027   80857 cri.go:89] found id: ""
	I0717 18:44:19.604053   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.604064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:19.604071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:19.604127   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:19.637357   80857 cri.go:89] found id: ""
	I0717 18:44:19.637387   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.637397   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:19.637403   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:19.637450   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:19.669094   80857 cri.go:89] found id: ""
	I0717 18:44:19.669126   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.669136   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:19.669145   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:19.669160   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:19.720218   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:19.720248   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:19.733320   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:19.733343   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:19.796229   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.796252   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:19.796267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:19.871157   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:19.871186   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:20.170824   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.670239   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.324037   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.324408   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.824030   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.098734   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.595859   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.409012   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:22.421477   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:22.421546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:22.457314   80857 cri.go:89] found id: ""
	I0717 18:44:22.457337   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.457346   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:22.457354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:22.457410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:22.490998   80857 cri.go:89] found id: ""
	I0717 18:44:22.491022   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.491030   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:22.491037   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:22.491090   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:22.523904   80857 cri.go:89] found id: ""
	I0717 18:44:22.523934   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.523945   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:22.523953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:22.524012   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:22.555917   80857 cri.go:89] found id: ""
	I0717 18:44:22.555947   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.555956   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:22.555962   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:22.556026   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:22.588510   80857 cri.go:89] found id: ""
	I0717 18:44:22.588552   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.588565   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:22.588574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:22.588652   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:22.621854   80857 cri.go:89] found id: ""
	I0717 18:44:22.621883   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.621893   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:22.621901   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:22.621956   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:22.653897   80857 cri.go:89] found id: ""
	I0717 18:44:22.653921   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.653931   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:22.653938   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:22.654001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:22.685731   80857 cri.go:89] found id: ""
	I0717 18:44:22.685760   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.685770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:22.685779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:22.685792   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:22.735514   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:22.735545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:22.748148   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:22.748169   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:22.809637   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:22.809666   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:22.809682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:22.886014   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:22.886050   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:24.670825   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:27.169930   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.824694   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.324620   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.597423   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.095788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.431906   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:25.444866   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:25.444965   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:25.477211   80857 cri.go:89] found id: ""
	I0717 18:44:25.477245   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.477257   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:25.477264   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:25.477366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:25.512077   80857 cri.go:89] found id: ""
	I0717 18:44:25.512108   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.512120   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:25.512127   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:25.512177   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:25.543953   80857 cri.go:89] found id: ""
	I0717 18:44:25.543974   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.543981   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:25.543987   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:25.544032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:25.574955   80857 cri.go:89] found id: ""
	I0717 18:44:25.574980   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.574990   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:25.574997   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:25.575054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:25.607078   80857 cri.go:89] found id: ""
	I0717 18:44:25.607106   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.607117   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:25.607125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:25.607188   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:25.643129   80857 cri.go:89] found id: ""
	I0717 18:44:25.643152   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.643162   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:25.643169   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:25.643225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:25.678220   80857 cri.go:89] found id: ""
	I0717 18:44:25.678241   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.678249   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:25.678254   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:25.678309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:25.715405   80857 cri.go:89] found id: ""
	I0717 18:44:25.715433   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.715446   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:25.715458   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:25.715474   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:25.772978   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:25.773008   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:25.786559   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:25.786587   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:25.853369   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:25.853386   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:25.853398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:25.954346   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:25.954398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:28.498591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:28.511701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:28.511762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:28.543527   80857 cri.go:89] found id: ""
	I0717 18:44:28.543551   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.543559   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:28.543565   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:28.543624   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:28.574737   80857 cri.go:89] found id: ""
	I0717 18:44:28.574762   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.574769   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:28.574776   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:28.574835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:28.608129   80857 cri.go:89] found id: ""
	I0717 18:44:28.608166   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.608174   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:28.608179   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:28.608234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:28.644324   80857 cri.go:89] found id: ""
	I0717 18:44:28.644348   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.644357   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:28.644371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:28.644426   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:28.675830   80857 cri.go:89] found id: ""
	I0717 18:44:28.675859   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.675870   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:28.675877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:28.675937   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:28.705713   80857 cri.go:89] found id: ""
	I0717 18:44:28.705749   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.705760   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:28.705768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:28.705821   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:28.738648   80857 cri.go:89] found id: ""
	I0717 18:44:28.738677   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.738688   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:28.738695   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:28.738752   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:28.768877   80857 cri.go:89] found id: ""
	I0717 18:44:28.768906   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.768916   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:28.768927   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:28.768953   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:28.818951   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:28.818985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:28.832813   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:28.832843   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:28.910030   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:28.910051   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:28.910063   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:28.986706   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:28.986743   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:29.170559   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.669543   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.824906   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:33.324261   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.096916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:32.597522   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.529154   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:31.543261   80857 kubeadm.go:597] duration metric: took 4m4.346231712s to restartPrimaryControlPlane
	W0717 18:44:31.543327   80857 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:31.543350   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:44:33.670602   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.169669   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.325082   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.824371   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.096445   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.097375   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:39.098005   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.752008   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.208633612s)
	I0717 18:44:36.752076   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:44:36.765411   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:44:36.774556   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:44:36.783406   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:44:36.783427   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:44:36.783479   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:44:36.791953   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:44:36.792007   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:44:36.800929   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:44:36.808988   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:44:36.809049   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:44:36.817312   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.825586   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:44:36.825648   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.834783   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:44:36.843109   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:44:36.843166   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:44:36.852276   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:44:37.058251   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:44:38.170695   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.671193   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.324181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.818959   80401 pod_ready.go:81] duration metric: took 4m0.000961975s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	E0717 18:44:40.818998   80401 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:44:40.819017   80401 pod_ready.go:38] duration metric: took 4m12.045669741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:44:40.819042   80401 kubeadm.go:597] duration metric: took 4m22.276381575s to restartPrimaryControlPlane
	W0717 18:44:40.819091   80401 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:40.819116   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:44:41.597013   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:44.097096   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:43.170145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:45.670626   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:46.595570   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.598459   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.169822   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:50.170686   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:52.670255   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:51.097591   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:53.597467   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:55.170853   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:57.670157   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:56.096506   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:58.107493   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.170210   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.672286   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.596747   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.590517   81068 pod_ready.go:81] duration metric: took 4m0.000120095s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:02.590549   81068 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:02.590572   81068 pod_ready.go:38] duration metric: took 4m10.536894511s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:02.590607   81068 kubeadm.go:597] duration metric: took 4m18.045314131s to restartPrimaryControlPlane
	W0717 18:45:02.590672   81068 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:02.590702   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:06.920900   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.10175503s)
	I0717 18:45:06.921009   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:06.952090   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:06.962820   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:06.979545   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:06.979577   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:06.979641   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:06.990493   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:06.990574   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:07.014934   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:07.024381   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:07.024449   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:07.033573   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.042495   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:07.042552   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.051233   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:07.059616   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:07.059674   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:07.068348   80401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:07.112042   80401 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 18:45:07.112188   80401 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:07.229262   80401 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:07.229356   80401 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:07.229491   80401 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 18:45:07.239251   80401 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:05.171753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.669753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.241949   80401 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:07.242054   80401 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:07.242150   80401 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:07.242253   80401 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:07.242355   80401 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:07.242459   80401 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:07.242536   80401 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:07.242620   80401 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:07.242721   80401 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:07.242835   80401 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:07.242937   80401 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:07.242998   80401 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:07.243068   80401 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:07.641462   80401 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:07.705768   80401 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:07.821102   80401 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:07.898702   80401 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:08.107470   80401 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:08.107945   80401 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:08.111615   80401 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:08.113464   80401 out.go:204]   - Booting up control plane ...
	I0717 18:45:08.113572   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:08.113695   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:08.113843   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:08.131411   80401 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:08.137563   80401 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:08.137622   80401 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:08.268403   80401 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:08.268519   80401 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:08.769158   80401 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.386396ms
	I0717 18:45:08.769265   80401 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:45:09.669968   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:11.670466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:13.771873   80401 kubeadm.go:310] [api-check] The API server is healthy after 5.002458706s
	I0717 18:45:13.789581   80401 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:13.804268   80401 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:13.831438   80401 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:13.831641   80401 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-066175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:13.845165   80401 kubeadm.go:310] [bootstrap-token] Using token: fscs12.0o2n9pl0vxdw75m1
	I0717 18:45:13.846851   80401 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:13.847002   80401 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:13.854788   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:13.866828   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:13.871541   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:13.875508   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:13.880068   80401 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:14.179824   80401 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:14.669946   80401 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:15.180053   80401 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:15.180076   80401 kubeadm.go:310] 
	I0717 18:45:15.180180   80401 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:15.180201   80401 kubeadm.go:310] 
	I0717 18:45:15.180287   80401 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:15.180295   80401 kubeadm.go:310] 
	I0717 18:45:15.180348   80401 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:15.180437   80401 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:15.180517   80401 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:15.180530   80401 kubeadm.go:310] 
	I0717 18:45:15.180607   80401 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:15.180617   80401 kubeadm.go:310] 
	I0717 18:45:15.180682   80401 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:15.180692   80401 kubeadm.go:310] 
	I0717 18:45:15.180775   80401 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:15.180871   80401 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:15.180984   80401 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:15.180996   80401 kubeadm.go:310] 
	I0717 18:45:15.181107   80401 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:15.181221   80401 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:15.181234   80401 kubeadm.go:310] 
	I0717 18:45:15.181370   80401 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181523   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:15.181571   80401 kubeadm.go:310] 	--control-plane 
	I0717 18:45:15.181579   80401 kubeadm.go:310] 
	I0717 18:45:15.181679   80401 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:15.181690   80401 kubeadm.go:310] 
	I0717 18:45:15.181802   80401 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181954   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:15.182460   80401 kubeadm.go:310] W0717 18:45:07.084606    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.182848   80401 kubeadm.go:310] W0717 18:45:07.085710    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.183017   80401 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:15.183038   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:45:15.183048   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:15.185022   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:13.671267   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.671682   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.186444   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:15.197514   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:45:15.216000   80401 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:15.216097   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.216157   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-066175 minikube.k8s.io/updated_at=2024_07_17T18_45_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=no-preload-066175 minikube.k8s.io/primary=true
	I0717 18:45:15.251049   80401 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:15.383234   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.884265   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.384075   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.883375   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.383864   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.884072   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.383283   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.883644   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.384366   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.507413   80401 kubeadm.go:1113] duration metric: took 4.291369352s to wait for elevateKubeSystemPrivileges
	I0717 18:45:19.507450   80401 kubeadm.go:394] duration metric: took 5m1.019320853s to StartCluster
	I0717 18:45:19.507473   80401 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.507570   80401 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:19.510004   80401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.510329   80401 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:19.510401   80401 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:19.510484   80401 addons.go:69] Setting storage-provisioner=true in profile "no-preload-066175"
	I0717 18:45:19.510515   80401 addons.go:234] Setting addon storage-provisioner=true in "no-preload-066175"
	W0717 18:45:19.510523   80401 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:19.510530   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:45:19.510531   80401 addons.go:69] Setting default-storageclass=true in profile "no-preload-066175"
	I0717 18:45:19.510553   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510551   80401 addons.go:69] Setting metrics-server=true in profile "no-preload-066175"
	I0717 18:45:19.510572   80401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-066175"
	I0717 18:45:19.510586   80401 addons.go:234] Setting addon metrics-server=true in "no-preload-066175"
	W0717 18:45:19.510596   80401 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:19.510628   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511027   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511047   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511075   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511102   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.512057   80401 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:19.513662   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:19.532038   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40719
	I0717 18:45:19.532059   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0717 18:45:19.532048   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0717 18:45:19.532557   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532562   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532701   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.533086   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533107   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533246   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533261   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533276   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533295   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533455   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533671   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533732   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533851   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.533933   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.533958   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.534280   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.534310   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.537749   80401 addons.go:234] Setting addon default-storageclass=true in "no-preload-066175"
	W0717 18:45:19.537773   80401 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:19.537804   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.538168   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.538206   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.550488   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I0717 18:45:19.551013   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.551625   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.551647   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.552005   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.552335   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.553613   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0717 18:45:19.553633   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0717 18:45:19.554184   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554243   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554271   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.554784   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554801   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.554965   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554986   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.555220   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555350   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555393   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.555995   80401 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:19.556103   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.556229   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.556825   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.557482   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:19.557499   80401 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:19.557517   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.558437   80401 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:19.560069   80401 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.560084   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:19.560100   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.560881   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.560908   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.560932   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.561265   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.561477   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.561633   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.561732   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.563601   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564025   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.564197   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.564219   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564378   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.564549   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.564686   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.579324   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37271
	I0717 18:45:19.579786   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.580331   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.580354   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.580697   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.580925   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.582700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.582910   80401 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.582923   80401 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:19.582936   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.585938   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586387   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.586414   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586605   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.586758   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.586920   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.587061   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.706369   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:19.727936   80401 node_ready.go:35] waiting up to 6m0s for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738822   80401 node_ready.go:49] node "no-preload-066175" has status "Ready":"True"
	I0717 18:45:19.738841   80401 node_ready.go:38] duration metric: took 10.872501ms for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738852   80401 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:19.744979   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:19.854180   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.873723   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:19.873746   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:19.883867   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.902041   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:19.902064   80401 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:19.926788   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:19.926867   80401 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:19.953788   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:20.571091   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571119   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571119   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571137   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571394   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.571439   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.571456   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571463   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571459   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572575   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571494   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572789   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572761   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572804   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572815   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572824   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.573027   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.573044   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589595   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.589614   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.589913   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.589940   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589918   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.789754   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.789776   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790082   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790103   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790113   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.790123   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790416   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790457   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790470   80401 addons.go:475] Verifying addon metrics-server=true in "no-preload-066175"
	I0717 18:45:20.790416   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.792175   80401 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:45:18.169876   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:20.170261   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:22.664656   80180 pod_ready.go:81] duration metric: took 4m0.000669682s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:22.664696   80180 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:22.664716   80180 pod_ready.go:38] duration metric: took 4m9.027997903s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:22.664746   80180 kubeadm.go:597] duration metric: took 4m19.955287366s to restartPrimaryControlPlane
	W0717 18:45:22.664823   80180 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:22.664854   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:20.793543   80401 addons.go:510] duration metric: took 1.283145408s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 18:45:21.766367   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.252243   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.771415   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:24.771443   80401 pod_ready.go:81] duration metric: took 5.026437249s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:24.771457   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:26.777371   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:28.778629   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.277550   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.792126   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.792154   80401 pod_ready.go:81] duration metric: took 7.020687724s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.792168   80401 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798687   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.798708   80401 pod_ready.go:81] duration metric: took 6.534344ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798717   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803428   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.803452   80401 pod_ready.go:81] duration metric: took 4.727536ms for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803464   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815053   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.815078   80401 pod_ready.go:81] duration metric: took 11.60679ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815092   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824126   80401 pod_ready.go:92] pod "kube-proxy-rgp5c" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.824151   80401 pod_ready.go:81] duration metric: took 9.050394ms for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824163   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176378   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:32.176404   80401 pod_ready.go:81] duration metric: took 352.232802ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176414   80401 pod_ready.go:38] duration metric: took 12.437548785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:32.176430   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:32.176492   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:32.190918   80401 api_server.go:72] duration metric: took 12.680546008s to wait for apiserver process to appear ...
	I0717 18:45:32.190942   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:32.190963   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:45:32.196011   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:45:32.197004   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:45:32.197024   80401 api_server.go:131] duration metric: took 6.075734ms to wait for apiserver health ...
	I0717 18:45:32.197033   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:32.379383   80401 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:32.379412   80401 system_pods.go:61] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.379416   80401 system_pods.go:61] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.379420   80401 system_pods.go:61] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.379423   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.379427   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.379431   80401 system_pods.go:61] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.379433   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.379439   80401 system_pods.go:61] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.379442   80401 system_pods.go:61] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.379450   80401 system_pods.go:74] duration metric: took 182.412193ms to wait for pod list to return data ...
	I0717 18:45:32.379456   80401 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:32.576324   80401 default_sa.go:45] found service account: "default"
	I0717 18:45:32.576348   80401 default_sa.go:55] duration metric: took 196.886306ms for default service account to be created ...
	I0717 18:45:32.576357   80401 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:32.780237   80401 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:32.780266   80401 system_pods.go:89] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.780272   80401 system_pods.go:89] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.780276   80401 system_pods.go:89] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.780280   80401 system_pods.go:89] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.780284   80401 system_pods.go:89] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.780288   80401 system_pods.go:89] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.780291   80401 system_pods.go:89] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.780298   80401 system_pods.go:89] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.780302   80401 system_pods.go:89] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.780314   80401 system_pods.go:126] duration metric: took 203.948509ms to wait for k8s-apps to be running ...
	I0717 18:45:32.780323   80401 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:32.780368   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:32.796763   80401 system_svc.go:56] duration metric: took 16.430293ms WaitForService to wait for kubelet
	I0717 18:45:32.796791   80401 kubeadm.go:582] duration metric: took 13.286425468s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:32.796809   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:32.977271   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:32.977295   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:32.977305   80401 node_conditions.go:105] duration metric: took 180.491938ms to run NodePressure ...
	I0717 18:45:32.977315   80401 start.go:241] waiting for startup goroutines ...
	I0717 18:45:32.977322   80401 start.go:246] waiting for cluster config update ...
	I0717 18:45:32.977331   80401 start.go:255] writing updated cluster config ...
	I0717 18:45:32.977544   80401 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:33.022678   80401 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 18:45:33.024737   80401 out.go:177] * Done! kubectl is now configured to use "no-preload-066175" cluster and "default" namespace by default
	I0717 18:45:33.625503   81068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.034773328s)
	I0717 18:45:33.625584   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:33.640151   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:33.650198   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:33.659027   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:33.659048   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:33.659088   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:45:33.667607   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:33.667663   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:33.677632   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:45:33.685631   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:33.685683   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:33.694068   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.702840   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:33.702894   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.711560   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:45:33.719883   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:33.719928   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:33.729898   81068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:33.781672   81068 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:45:33.781776   81068 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:33.908046   81068 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:33.908199   81068 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:33.908366   81068 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:45:34.103926   81068 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:34.105872   81068 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:34.105979   81068 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:34.106063   81068 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:34.106183   81068 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:34.106425   81068 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:34.106542   81068 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:34.106624   81068 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:34.106729   81068 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:34.106827   81068 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:34.106901   81068 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:34.106984   81068 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:34.107046   81068 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:34.107142   81068 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:34.390326   81068 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:34.442610   81068 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:34.692719   81068 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:34.777644   81068 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:35.101349   81068 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:35.102039   81068 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:35.104892   81068 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:35.106561   81068 out.go:204]   - Booting up control plane ...
	I0717 18:45:35.106689   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:35.106775   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:35.107611   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:35.126132   81068 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:35.127180   81068 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:35.127245   81068 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:35.250173   81068 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:35.250284   81068 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:35.752731   81068 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.583425ms
	I0717 18:45:35.752861   81068 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:45:40.754304   81068 kubeadm.go:310] [api-check] The API server is healthy after 5.001385597s
	I0717 18:45:40.766072   81068 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:40.785708   81068 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:40.816360   81068 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:40.816576   81068 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-022930 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:40.830588   81068 kubeadm.go:310] [bootstrap-token] Using token: kxmxsp.4wnt2q9oqhdfdirj
	I0717 18:45:40.831905   81068 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:40.832031   81068 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:40.840754   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:40.850104   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:40.853748   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:40.857341   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:40.860783   81068 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:41.161978   81068 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:41.600410   81068 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:42.161763   81068 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:42.163450   81068 kubeadm.go:310] 
	I0717 18:45:42.163541   81068 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:42.163558   81068 kubeadm.go:310] 
	I0717 18:45:42.163661   81068 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:42.163673   81068 kubeadm.go:310] 
	I0717 18:45:42.163707   81068 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:42.163797   81068 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:42.163870   81068 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:42.163881   81068 kubeadm.go:310] 
	I0717 18:45:42.163974   81068 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:42.163990   81068 kubeadm.go:310] 
	I0717 18:45:42.164058   81068 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:42.164077   81068 kubeadm.go:310] 
	I0717 18:45:42.164151   81068 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:42.164256   81068 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:42.164367   81068 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:42.164377   81068 kubeadm.go:310] 
	I0717 18:45:42.164489   81068 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:42.164588   81068 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:42.164595   81068 kubeadm.go:310] 
	I0717 18:45:42.164683   81068 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.164826   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:42.164862   81068 kubeadm.go:310] 	--control-plane 
	I0717 18:45:42.164870   81068 kubeadm.go:310] 
	I0717 18:45:42.165002   81068 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:42.165012   81068 kubeadm.go:310] 
	I0717 18:45:42.165143   81068 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.165257   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:42.166381   81068 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:42.166436   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:45:42.166456   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:42.168387   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:42.169678   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:42.180065   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:45:42.197116   81068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:42.197192   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.197217   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-022930 minikube.k8s.io/updated_at=2024_07_17T18_45_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=default-k8s-diff-port-022930 minikube.k8s.io/primary=true
	I0717 18:45:42.216456   81068 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:42.370148   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.870732   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.370980   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.871201   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.370616   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.370377   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.870614   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.370555   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.870513   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.370594   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.870651   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.370620   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.870863   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.371058   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.870188   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.370949   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.871187   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.370764   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.370298   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.870917   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.371193   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.870491   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.370274   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.871160   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.370879   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.870592   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.948131   81068 kubeadm.go:1113] duration metric: took 13.751000929s to wait for elevateKubeSystemPrivileges
	I0717 18:45:55.948166   81068 kubeadm.go:394] duration metric: took 5m11.453950834s to StartCluster
	I0717 18:45:55.948188   81068 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.948265   81068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:55.950777   81068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.951066   81068 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:55.951134   81068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:55.951202   81068 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951237   81068 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951247   81068 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:55.951243   81068 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951257   81068 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951293   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:45:55.951300   81068 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951318   81068 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:55.951319   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951348   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951292   81068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-022930"
	I0717 18:45:55.951712   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951732   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951769   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951747   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.952885   81068 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:55.954423   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:55.968158   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0717 18:45:55.968547   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
	I0717 18:45:55.968768   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.968917   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.969414   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969436   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969548   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969566   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969814   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970012   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970235   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.970413   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.970462   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.970809   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44281
	I0717 18:45:55.971165   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.974130   81068 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.974155   81068 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:55.974184   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.974549   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.974578   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.981608   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.981640   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.982054   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.982711   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.982754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.990665   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0717 18:45:55.991297   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.991922   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.991938   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.992213   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.992346   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.993952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:55.996135   81068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:55.997555   81068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:55.997579   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:55.997602   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:55.998414   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0717 18:45:55.998963   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.999540   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.999554   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.000799   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0717 18:45:56.001014   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001096   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.001419   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.001512   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.001527   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001755   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.001929   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.002102   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.002141   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:56.002178   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:56.002255   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.002686   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.002709   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.003047   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.003251   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.004660   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.006355   81068 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:56.007646   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:56.007663   81068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:56.007678   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.010711   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.011220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011452   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.011637   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.011806   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.011932   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.021277   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0717 18:45:56.021980   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.022568   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.022585   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.022949   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.023127   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.025023   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.025443   81068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.025458   81068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:56.025476   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.028095   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.028477   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028666   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.028853   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.029081   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.029226   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.173482   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:56.194585   81068 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203594   81068 node_ready.go:49] node "default-k8s-diff-port-022930" has status "Ready":"True"
	I0717 18:45:56.203614   81068 node_ready.go:38] duration metric: took 8.994875ms for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203623   81068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:56.207834   81068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212424   81068 pod_ready.go:92] pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.212444   81068 pod_ready.go:81] duration metric: took 4.58857ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212454   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217013   81068 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.217031   81068 pod_ready.go:81] duration metric: took 4.569971ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217040   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221441   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.221458   81068 pod_ready.go:81] duration metric: took 4.411121ms for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221470   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.268740   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:56.268765   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:56.290194   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.310957   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:56.310981   81068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:56.352789   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:56.352821   81068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:56.378402   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:56.379632   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:56.518737   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.518766   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519075   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519097   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.519108   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.519117   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519352   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519383   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519426   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.529290   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.529317   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.529618   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.529680   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.529697   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386401   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007961919s)
	I0717 18:45:57.386463   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.386480   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386925   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.386980   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386999   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.387017   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386958   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.387283   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.387304   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731240   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351571451s)
	I0717 18:45:57.731287   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731616   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.731650   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731664   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731672   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731685   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731905   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731930   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731949   81068 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-022930"
	I0717 18:45:57.731960   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.734601   81068 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 18:45:53.693038   80180 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.028164403s)
	I0717 18:45:53.693099   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:53.709020   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:53.718790   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:53.728384   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:53.728405   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:53.728444   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:53.737315   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:53.737384   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:53.746336   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:53.754297   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:53.754347   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:53.763252   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.772186   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:53.772229   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.780829   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:53.788899   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:53.788955   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:53.797324   80180 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:53.982580   80180 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:57.735769   81068 addons.go:510] duration metric: took 1.784634456s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 18:45:57.742312   81068 pod_ready.go:92] pod "kube-proxy-hnb5v" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.742333   81068 pod_ready.go:81] duration metric: took 1.520854667s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.742344   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809858   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.809885   81068 pod_ready.go:81] duration metric: took 67.527182ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809896   81068 pod_ready.go:38] duration metric: took 1.606263576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:57.809914   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:57.809972   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:57.847337   81068 api_server.go:72] duration metric: took 1.896234247s to wait for apiserver process to appear ...
	I0717 18:45:57.847366   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:57.847391   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:45:57.853537   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:45:57.856587   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:45:57.856661   81068 api_server.go:131] duration metric: took 9.286402ms to wait for apiserver health ...
	I0717 18:45:57.856684   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:58.002336   81068 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:58.002374   81068 system_pods.go:61] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002383   81068 system_pods.go:61] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002396   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.002402   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.002408   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.002414   81068 system_pods.go:61] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.002418   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.002425   81068 system_pods.go:61] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.002435   81068 system_pods.go:61] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.002452   81068 system_pods.go:74] duration metric: took 145.752129ms to wait for pod list to return data ...
	I0717 18:45:58.002463   81068 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:58.197223   81068 default_sa.go:45] found service account: "default"
	I0717 18:45:58.197250   81068 default_sa.go:55] duration metric: took 194.774408ms for default service account to be created ...
	I0717 18:45:58.197260   81068 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:58.401825   81068 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:58.401878   81068 system_pods.go:89] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401891   81068 system_pods.go:89] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401904   81068 system_pods.go:89] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.401917   81068 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.401927   81068 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.401935   81068 system_pods.go:89] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.401940   81068 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.401948   81068 system_pods.go:89] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.401956   81068 system_pods.go:89] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.401965   81068 system_pods.go:126] duration metric: took 204.700297ms to wait for k8s-apps to be running ...
	I0717 18:45:58.401975   81068 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:58.402024   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:58.416020   81068 system_svc.go:56] duration metric: took 14.023536ms WaitForService to wait for kubelet
	I0717 18:45:58.416056   81068 kubeadm.go:582] duration metric: took 2.464957357s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:58.416079   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:58.598829   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:58.598863   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:58.598876   81068 node_conditions.go:105] duration metric: took 182.791383ms to run NodePressure ...
	I0717 18:45:58.598891   81068 start.go:241] waiting for startup goroutines ...
	I0717 18:45:58.598899   81068 start.go:246] waiting for cluster config update ...
	I0717 18:45:58.598912   81068 start.go:255] writing updated cluster config ...
	I0717 18:45:58.599267   81068 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:58.661380   81068 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:45:58.663085   81068 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-022930" cluster and "default" namespace by default
	I0717 18:46:02.558673   80180 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:46:02.558766   80180 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:02.558842   80180 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:02.558980   80180 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:02.559118   80180 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:02.559210   80180 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:02.561934   80180 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:02.562036   80180 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:02.562108   80180 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:02.562191   80180 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:02.562290   80180 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:02.562393   80180 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:02.562478   80180 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:02.562565   80180 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:02.562643   80180 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:02.562711   80180 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:02.562826   80180 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:02.562886   80180 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:02.562958   80180 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:02.563005   80180 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:02.563081   80180 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:46:02.563136   80180 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:02.563210   80180 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:02.563293   80180 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:02.563405   80180 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:02.563468   80180 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:02.564989   80180 out.go:204]   - Booting up control plane ...
	I0717 18:46:02.565092   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:02.565181   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:02.565270   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:02.565400   80180 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:02.565526   80180 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:02.565597   80180 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:02.565783   80180 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:46:02.565880   80180 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:46:02.565959   80180 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.323304ms
	I0717 18:46:02.566046   80180 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:46:02.566105   80180 kubeadm.go:310] [api-check] The API server is healthy after 5.002038309s
	I0717 18:46:02.566206   80180 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:46:02.566307   80180 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:46:02.566359   80180 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:46:02.566525   80180 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-527415 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:46:02.566575   80180 kubeadm.go:310] [bootstrap-token] Using token: xeax16.7z40teb0jswemrgg
	I0717 18:46:02.568038   80180 out.go:204]   - Configuring RBAC rules ...
	I0717 18:46:02.568120   80180 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:46:02.568194   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:46:02.568314   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:46:02.568449   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:46:02.568553   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:46:02.568660   80180 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:46:02.568807   80180 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:46:02.568877   80180 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:46:02.568926   80180 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:46:02.568936   80180 kubeadm.go:310] 
	I0717 18:46:02.569032   80180 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:46:02.569044   80180 kubeadm.go:310] 
	I0717 18:46:02.569108   80180 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:46:02.569114   80180 kubeadm.go:310] 
	I0717 18:46:02.569157   80180 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:46:02.569249   80180 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:46:02.569326   80180 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:46:02.569346   80180 kubeadm.go:310] 
	I0717 18:46:02.569432   80180 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:46:02.569442   80180 kubeadm.go:310] 
	I0717 18:46:02.569511   80180 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:46:02.569519   80180 kubeadm.go:310] 
	I0717 18:46:02.569599   80180 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:46:02.569695   80180 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:46:02.569790   80180 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:46:02.569797   80180 kubeadm.go:310] 
	I0717 18:46:02.569905   80180 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:46:02.569985   80180 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:46:02.569998   80180 kubeadm.go:310] 
	I0717 18:46:02.570096   80180 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570234   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:46:02.570264   80180 kubeadm.go:310] 	--control-plane 
	I0717 18:46:02.570273   80180 kubeadm.go:310] 
	I0717 18:46:02.570348   80180 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:46:02.570355   80180 kubeadm.go:310] 
	I0717 18:46:02.570429   80180 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570555   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:46:02.570569   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:46:02.570578   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:46:02.571934   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:46:02.573034   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:46:02.583253   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:46:02.603658   80180 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-527415 minikube.k8s.io/updated_at=2024_07_17T18_46_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=embed-certs-527415 minikube.k8s.io/primary=true
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:02.621414   80180 ops.go:34] apiserver oom_adj: -16
	I0717 18:46:02.792226   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.292632   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.792270   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.293220   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.793011   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.292596   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.793043   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.293286   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.793069   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.292569   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.792604   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.293028   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.792259   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.292273   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.792672   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.293080   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.792442   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.292894   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.792436   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.292411   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.792327   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.292909   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.792878   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.293188   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.793038   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.292453   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.792367   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.898487   80180 kubeadm.go:1113] duration metric: took 13.294815165s to wait for elevateKubeSystemPrivileges
	I0717 18:46:15.898528   80180 kubeadm.go:394] duration metric: took 5m13.234208822s to StartCluster
	I0717 18:46:15.898546   80180 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.898626   80180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:46:15.900239   80180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.900462   80180 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:46:15.900564   80180 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:46:15.900648   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:46:15.900655   80180 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-527415"
	I0717 18:46:15.900667   80180 addons.go:69] Setting default-storageclass=true in profile "embed-certs-527415"
	I0717 18:46:15.900691   80180 addons.go:69] Setting metrics-server=true in profile "embed-certs-527415"
	I0717 18:46:15.900704   80180 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-527415"
	I0717 18:46:15.900709   80180 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-527415"
	I0717 18:46:15.900714   80180 addons.go:234] Setting addon metrics-server=true in "embed-certs-527415"
	W0717 18:46:15.900747   80180 addons.go:243] addon metrics-server should already be in state true
	I0717 18:46:15.900777   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	W0717 18:46:15.900715   80180 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:46:15.900852   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.901106   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901150   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901152   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901183   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901264   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901298   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.902177   80180 out.go:177] * Verifying Kubernetes components...
	I0717 18:46:15.903698   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:46:15.918294   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0717 18:46:15.918295   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0717 18:46:15.918859   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.918909   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919433   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919455   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919478   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I0717 18:46:15.919548   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919572   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919788   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.919875   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919883   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920316   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920323   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.920338   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.920345   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920387   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920425   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920695   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920890   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.924623   80180 addons.go:234] Setting addon default-storageclass=true in "embed-certs-527415"
	W0717 18:46:15.924644   80180 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:46:15.924672   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.925801   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.925830   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.936020   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0717 18:46:15.936280   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0717 18:46:15.936365   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.936674   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.937144   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937164   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937229   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937239   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937565   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937587   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937770   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.937872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.939671   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.939856   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.941929   80180 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:46:15.941934   80180 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:46:15.943632   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:46:15.943650   80180 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:46:15.943668   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.943715   80180 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:15.943724   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:46:15.943737   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.946283   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0717 18:46:15.946815   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.947230   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.947240   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.947272   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.947953   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.947987   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948001   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.948179   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.948223   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948248   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.948388   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.948604   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.948627   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.948653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948832   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.948870   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.948895   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.949086   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.949307   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.949454   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.969385   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0717 18:46:15.969789   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.970221   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.970241   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.970756   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.970963   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.972631   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.972849   80180 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:15.972868   80180 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:46:15.972889   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.975680   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976123   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.976187   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976320   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.976496   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.976657   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.976748   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:16.134605   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:46:16.206139   80180 node_ready.go:35] waiting up to 6m0s for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214532   80180 node_ready.go:49] node "embed-certs-527415" has status "Ready":"True"
	I0717 18:46:16.214550   80180 node_ready.go:38] duration metric: took 8.382109ms for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214568   80180 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:16.223573   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:16.254146   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:46:16.254166   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:46:16.293257   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:16.312304   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:16.334927   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:46:16.334949   80180 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:46:16.404696   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:16.404723   80180 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:46:16.462835   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281088   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281157   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281395   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281402   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281424   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281427   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281432   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281436   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281676   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281678   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281700   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281705   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281722   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281732   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.300264   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.300294   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.300592   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.300643   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.300672   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.489477   80180 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026593042s)
	I0717 18:46:17.489520   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.489534   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490020   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.490047   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490055   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490068   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.490077   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490344   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490373   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490384   80180 addons.go:475] Verifying addon metrics-server=true in "embed-certs-527415"
	I0717 18:46:17.490397   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.492257   80180 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:46:17.493487   80180 addons.go:510] duration metric: took 1.592928152s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 18:46:18.230569   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.230592   80180 pod_ready.go:81] duration metric: took 2.006995421s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.230603   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235298   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.235317   80180 pod_ready.go:81] duration metric: took 4.707534ms for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235327   80180 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.238998   80180 pod_ready.go:92] pod "etcd-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.239015   80180 pod_ready.go:81] duration metric: took 3.681191ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.239023   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242949   80180 pod_ready.go:92] pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.242967   80180 pod_ready.go:81] duration metric: took 3.937614ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242977   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246567   80180 pod_ready.go:92] pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.246580   80180 pod_ready.go:81] duration metric: took 3.597434ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246588   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628607   80180 pod_ready.go:92] pod "kube-proxy-m52fq" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.628636   80180 pod_ready.go:81] duration metric: took 382.042151ms for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628650   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028536   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:19.028558   80180 pod_ready.go:81] duration metric: took 399.900565ms for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028565   80180 pod_ready.go:38] duration metric: took 2.813989212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:19.028578   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:46:19.028630   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:46:19.044787   80180 api_server.go:72] duration metric: took 3.144295616s to wait for apiserver process to appear ...
	I0717 18:46:19.044810   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:46:19.044825   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:46:19.051106   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:46:19.052094   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:46:19.052111   80180 api_server.go:131] duration metric: took 7.296406ms to wait for apiserver health ...
	I0717 18:46:19.052117   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:46:19.231877   80180 system_pods.go:59] 9 kube-system pods found
	I0717 18:46:19.231905   80180 system_pods.go:61] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.231912   80180 system_pods.go:61] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.231916   80180 system_pods.go:61] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.231921   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.231925   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.231929   80180 system_pods.go:61] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.231934   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.231942   80180 system_pods.go:61] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.231947   80180 system_pods.go:61] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.231957   80180 system_pods.go:74] duration metric: took 179.833729ms to wait for pod list to return data ...
	I0717 18:46:19.231966   80180 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:46:19.427972   80180 default_sa.go:45] found service account: "default"
	I0717 18:46:19.427994   80180 default_sa.go:55] duration metric: took 196.021611ms for default service account to be created ...
	I0717 18:46:19.428002   80180 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:46:19.630730   80180 system_pods.go:86] 9 kube-system pods found
	I0717 18:46:19.630755   80180 system_pods.go:89] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.630760   80180 system_pods.go:89] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.630765   80180 system_pods.go:89] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.630769   80180 system_pods.go:89] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.630774   80180 system_pods.go:89] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.630778   80180 system_pods.go:89] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.630782   80180 system_pods.go:89] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.630788   80180 system_pods.go:89] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.630792   80180 system_pods.go:89] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.630800   80180 system_pods.go:126] duration metric: took 202.793522ms to wait for k8s-apps to be running ...
	I0717 18:46:19.630806   80180 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:46:19.630849   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:19.646111   80180 system_svc.go:56] duration metric: took 15.296964ms WaitForService to wait for kubelet
	I0717 18:46:19.646133   80180 kubeadm.go:582] duration metric: took 3.745647205s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:46:19.646149   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:46:19.828333   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:46:19.828356   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:46:19.828368   80180 node_conditions.go:105] duration metric: took 182.213813ms to run NodePressure ...
	I0717 18:46:19.828381   80180 start.go:241] waiting for startup goroutines ...
	I0717 18:46:19.828389   80180 start.go:246] waiting for cluster config update ...
	I0717 18:46:19.828401   80180 start.go:255] writing updated cluster config ...
	I0717 18:46:19.828690   80180 ssh_runner.go:195] Run: rm -f paused
	I0717 18:46:19.877774   80180 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:46:19.879769   80180 out.go:177] * Done! kubectl is now configured to use "embed-certs-527415" cluster and "default" namespace by default
	I0717 18:46:33.124646   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:46:33.124790   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:46:33.126245   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.126307   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.126409   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.126547   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.126673   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:33.126734   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:33.128541   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:33.128626   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:33.128707   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:33.128817   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:33.128901   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:33.129018   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:33.129091   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:33.129172   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:33.129249   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:33.129339   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:33.129408   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:33.129444   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:33.129532   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:33.129603   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:33.129665   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:33.129765   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:33.129812   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:33.129929   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:33.130037   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:33.130093   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:33.130177   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:33.131546   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:33.131652   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:33.131750   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:33.131858   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:33.131939   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:33.132085   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:46:33.132133   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:46:33.132189   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132355   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132419   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132585   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132657   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132839   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132900   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133143   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133248   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133452   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133460   80857 kubeadm.go:310] 
	I0717 18:46:33.133494   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:46:33.133529   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:46:33.133535   80857 kubeadm.go:310] 
	I0717 18:46:33.133564   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:46:33.133599   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:46:33.133727   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:46:33.133752   80857 kubeadm.go:310] 
	I0717 18:46:33.133905   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:46:33.133947   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:46:33.134002   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:46:33.134012   80857 kubeadm.go:310] 
	I0717 18:46:33.134116   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:46:33.134186   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:46:33.134193   80857 kubeadm.go:310] 
	I0717 18:46:33.134290   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:46:33.134367   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:46:33.134431   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:46:33.134491   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:46:33.134533   80857 kubeadm.go:310] 
	W0717 18:46:33.134615   80857 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 18:46:33.134669   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:46:33.590879   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:33.605393   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:46:33.614382   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:46:33.614405   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:46:33.614450   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:46:33.622849   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:46:33.622905   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:46:33.631852   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:46:33.640160   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:46:33.640211   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:46:33.648774   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.656740   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:46:33.656796   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.665799   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:46:33.674492   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:46:33.674547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:46:33.683627   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:46:33.746405   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.746472   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.881152   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.881297   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.881443   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:34.053199   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:34.055757   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:34.055843   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:34.055918   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:34.056030   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:34.056129   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:34.056232   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:34.056336   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:34.056431   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:34.056524   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:34.056656   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:34.056764   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:34.056824   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:34.056900   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:34.276456   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:34.491418   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:34.702265   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:34.874511   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:34.895484   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:34.896451   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:34.896536   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:35.040208   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:35.042291   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:35.042437   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:35.042565   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:35.044391   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:35.046206   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:35.050843   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:47:15.053070   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:47:15.053416   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:15.053586   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:20.053963   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:20.054207   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:30.054801   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:30.055011   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:50.055270   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:50.055465   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.053919   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:48:30.054133   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.054148   80857 kubeadm.go:310] 
	I0717 18:48:30.054231   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:48:30.054300   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:48:30.054326   80857 kubeadm.go:310] 
	I0717 18:48:30.054386   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:48:30.054443   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:48:30.054581   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:48:30.054593   80857 kubeadm.go:310] 
	I0717 18:48:30.054715   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:48:30.054761   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:48:30.054810   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:48:30.054818   80857 kubeadm.go:310] 
	I0717 18:48:30.054970   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:48:30.055069   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:48:30.055081   80857 kubeadm.go:310] 
	I0717 18:48:30.055236   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:48:30.055332   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:48:30.055396   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:48:30.055457   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:48:30.055483   80857 kubeadm.go:310] 
	I0717 18:48:30.056139   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:48:30.056246   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:48:30.056338   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:48:30.056413   80857 kubeadm.go:394] duration metric: took 8m2.908780359s to StartCluster
	I0717 18:48:30.056461   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:48:30.056524   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:48:30.102640   80857 cri.go:89] found id: ""
	I0717 18:48:30.102662   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.102669   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:48:30.102674   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:48:30.102724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:48:30.142516   80857 cri.go:89] found id: ""
	I0717 18:48:30.142548   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.142559   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:48:30.142567   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:48:30.142630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:48:30.178558   80857 cri.go:89] found id: ""
	I0717 18:48:30.178589   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.178598   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:48:30.178604   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:48:30.178677   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:48:30.211146   80857 cri.go:89] found id: ""
	I0717 18:48:30.211177   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.211186   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:48:30.211192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:48:30.211242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:48:30.244287   80857 cri.go:89] found id: ""
	I0717 18:48:30.244308   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.244314   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:48:30.244319   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:48:30.244364   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:48:30.274547   80857 cri.go:89] found id: ""
	I0717 18:48:30.274577   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.274587   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:48:30.274594   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:48:30.274660   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:48:30.306796   80857 cri.go:89] found id: ""
	I0717 18:48:30.306825   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.306835   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:48:30.306842   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:48:30.306903   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:48:30.341938   80857 cri.go:89] found id: ""
	I0717 18:48:30.341962   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.341972   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:48:30.341982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:48:30.341997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:48:30.407881   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:48:30.407925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:48:30.430885   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:48:30.430913   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:48:30.525366   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:48:30.525394   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:48:30.525408   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:48:30.639556   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:48:30.639588   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 18:48:30.677493   80857 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 18:48:30.677544   80857 out.go:239] * 
	W0717 18:48:30.677604   80857 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.677636   80857 out.go:239] * 
	W0717 18:48:30.678483   80857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:48:30.681792   80857 out.go:177] 
	W0717 18:48:30.682976   80857 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.683034   80857 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 18:48:30.683050   80857 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 18:48:30.684325   80857 out.go:177] 
	
	
	==> CRI-O <==
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.793947889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242655793921789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c85b6a8e-4fc4-444d-8c64-2cc7e7001fea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.794406416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e571ec3-8ba9-4aed-a43c-6b6a6ac80008 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.794463800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e571ec3-8ba9-4aed-a43c-6b6a6ac80008 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.794501714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1e571ec3-8ba9-4aed-a43c-6b6a6ac80008 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.829440918Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a87250c7-7b92-4ba0-bd16-64624773d9ad name=/runtime.v1.RuntimeService/Version
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.829540897Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a87250c7-7b92-4ba0-bd16-64624773d9ad name=/runtime.v1.RuntimeService/Version
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.830664317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56ad6842-9a37-41b5-bd98-c3718b0abdc0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.831147632Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242655831103966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56ad6842-9a37-41b5-bd98-c3718b0abdc0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.831775593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aeeac945-35f6-4ca9-9ea1-71f5ed354090 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.831851094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aeeac945-35f6-4ca9-9ea1-71f5ed354090 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.831903932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aeeac945-35f6-4ca9-9ea1-71f5ed354090 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.861014963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc1b7270-18bf-41ca-a90b-ba1473738ef9 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.861099494Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc1b7270-18bf-41ca-a90b-ba1473738ef9 name=/runtime.v1.RuntimeService/Version
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.861952712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f86fed87-f5d6-4526-8bab-5a54db33bc9c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.862339744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242655862310049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f86fed87-f5d6-4526-8bab-5a54db33bc9c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.862841100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d574a29d-cc61-4432-9c15-e5bc19c4602a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.862886958Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d574a29d-cc61-4432-9c15-e5bc19c4602a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.862920019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d574a29d-cc61-4432-9c15-e5bc19c4602a name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.891908387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=299fd626-3967-4ce0-aead-fae665630edb name=/runtime.v1.RuntimeService/Version
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.892000318Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=299fd626-3967-4ce0-aead-fae665630edb name=/runtime.v1.RuntimeService/Version
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.892944596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6829ed6e-817f-45aa-bf19-42e307623463 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.893318547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242655893297696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6829ed6e-817f-45aa-bf19-42e307623463 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.893833813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a71bfa51-b929-4fd1-9ba4-a5e55ecebb05 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.893902546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a71bfa51-b929-4fd1-9ba4-a5e55ecebb05 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 18:57:35 old-k8s-version-019549 crio[648]: time="2024-07-17 18:57:35.893938960Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a71bfa51-b929-4fd1-9ba4-a5e55ecebb05 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051628] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040768] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.517042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.721932] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.548665] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.018518] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.058706] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069719] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.203391] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.148278] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.237346] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.350758] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.060103] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.283579] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +13.881143] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 18:44] systemd-fstab-generator[5065]: Ignoring "noauto" option for root device
	[Jul17 18:46] systemd-fstab-generator[5343]: Ignoring "noauto" option for root device
	[  +0.061949] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:57:36 up 17 min,  0 users,  load average: 0.16, 0.05, 0.05
	Linux old-k8s-version-019549 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /usr/local/go/src/net/sock_posix.go:70 +0x1c5
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]: net.internetSocket(0x4f7fe40, 0xc0002034a0, 0x48ab5d6, 0x3, 0x4fb9160, 0x0, 0x4fb9160, 0xc00097fb30, 0x1, 0x0, ...)
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /usr/local/go/src/net/ipsock_posix.go:141 +0x145
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]: net.(*sysDialer).doDialTCP(0xc0008f6480, 0x4f7fe40, 0xc0002034a0, 0x0, 0xc00097fb30, 0x3fddce0, 0x70f9210, 0x0)
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /usr/local/go/src/net/tcpsock_posix.go:65 +0xc5
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]: net.(*sysDialer).dialTCP(0xc0008f6480, 0x4f7fe40, 0xc0002034a0, 0x0, 0xc00097fb30, 0x57b620, 0x48ab5d6, 0x7f81309415a0)
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /usr/local/go/src/net/tcpsock_posix.go:61 +0xd7
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]: net.(*sysDialer).dialSingle(0xc0008f6480, 0x4f7fe40, 0xc0002034a0, 0x4f1ff00, 0xc00097fb30, 0x0, 0x0, 0x0, 0x0)
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]: net.(*sysDialer).dialSerial(0xc0008f6480, 0x4f7fe40, 0xc0002034a0, 0xc0009c81a0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /usr/local/go/src/net/dial.go:548 +0x152
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]: net.(*Dialer).DialContext(0xc0001a6900, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009ca000, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000713500, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009ca000, 0x24, 0x60, 0x7f8130941738, 0x118, ...)
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]: net/http.(*Transport).dial(0xc00066c000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009ca000, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]: net/http.(*Transport).dialConn(0xc00066c000, 0x4f7fe00, 0xc000052030, 0x0, 0xc00087f4a0, 0x5, 0xc0009ca000, 0x24, 0x0, 0xc0009bc120, ...)
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]: net/http.(*Transport).dialConnFor(0xc00066c000, 0xc00089e580)
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]: created by net/http.(*Transport).queueForDial
	Jul 17 18:57:35 old-k8s-version-019549 kubelet[6542]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 17 18:57:35 old-k8s-version-019549 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 18:57:35 old-k8s-version-019549 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
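(Editorial note, not part of the captured log.) The crictl queries in the log above (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) all returned empty container lists, which is consistent with the kubelet never having started the static pods. A minimal sketch of re-running the same checks by hand, assuming the profile's VM is still up (the profile name and crictl flags are taken from this log), might be:

	# list every container CRI-O knows about, running or exited
	minikube ssh -p old-k8s-version-019549 -- sudo crictl ps -a
	# the same per-component query minikube itself ran
	minikube ssh -p old-k8s-version-019549 -- sudo crictl ps -a --quiet --name=kube-apiserver
	# sandboxes, in case pods were created but containers never started
	minikube ssh -p old-k8s-version-019549 -- sudo crictl pods

An empty listing here matches the "container status" section above and points back at the kubelet rather than at CRI-O.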
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019549 -n old-k8s-version-019549
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 2 (222.25222ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-019549" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.30s)
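(Editorial note, not part of the captured log.) The failure mode throughout this run is the kubelet health endpoint on 127.0.0.1:10248 refusing connections until kubeadm's wait-control-plane phase times out, with the kubelet unit itself exiting with status 255. Minikube's own suggestion in the log is to check 'journalctl -xeu kubelet' and to retry with --extra-config=kubelet.cgroup-driver=systemd. A rough manual sequence for chasing that, assuming the VM is still reachable (the profile name, the healthz URL, and the kubelet config path come from the log above; the crio.conf location and the grep are only illustrative), could be:

	# probe the kubelet health endpoint that kubeadm polls
	minikube ssh -p old-k8s-version-019549 -- curl -sS http://localhost:10248/healthz
	# read the tail of the kubelet journal for the actual crash reason
	minikube ssh -p old-k8s-version-019549 -- sudo journalctl -u kubelet -n 100 --no-pager
	# compare the cgroup driver seen by CRI-O and by the kubelet
	minikube ssh -p old-k8s-version-019549 -- sudo grep -ri cgroup /etc/crio /var/lib/kubelet/config.yaml
	# retry the start with the driver pinned to systemd, as the log suggests
	minikube start -p old-k8s-version-019549 --extra-config=kubelet.cgroup-driver=systemd

If the two drivers disagree (cgroupfs vs systemd), that lines up with the "misconfiguration of the node" hint kubeadm prints above.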

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (378.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-066175 -n no-preload-066175
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 19:00:54.429926028 +0000 UTC m=+6569.159121990
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-066175 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-066175 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.838µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-066175 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
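Note: the describe call above failed only because the test's context deadline had already expired (1.838µs remaining), so no deployment info was captured. A sketch of the equivalent manual check, run with a fresh deadline and not part of the recorded test run, would show whether the MetricsScraper image override (expected to contain registry.k8s.io/echoserver:1.4) was applied:

	kubectl --context no-preload-066175 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	kubectl --context no-preload-066175 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide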
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-066175 -n no-preload-066175
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-066175 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-066175 logs -n 25: (1.217062515s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-527415            | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-371172                                        | pause-371172                 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-341716 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | disable-driver-mounts-341716                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:34 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-066175             | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC | 17 Jul 24 18:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-066175                                   | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-022930  | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC | 17 Jul 24 18:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-527415                 | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-019549        | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-066175                  | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-066175 --memory=2200                     | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:45 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-019549             | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-022930       | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC | 17 Jul 24 18:45 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 19:00 UTC | 17 Jul 24 19:00 UTC |
	| start   | -p newest-cni-875270 --memory=2200 --alsologtostderr   | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:00 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:00:43
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:00:43.979043   87211 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:00:43.979200   87211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:00:43.979207   87211 out.go:304] Setting ErrFile to fd 2...
	I0717 19:00:43.979213   87211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:00:43.979429   87211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 19:00:43.980043   87211 out.go:298] Setting JSON to false
	I0717 19:00:43.981243   87211 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9787,"bootTime":1721233057,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:00:43.981316   87211 start.go:139] virtualization: kvm guest
	I0717 19:00:43.983421   87211 out.go:177] * [newest-cni-875270] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:00:43.984734   87211 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 19:00:43.984809   87211 notify.go:220] Checking for updates...
	I0717 19:00:43.987182   87211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:00:43.988431   87211 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 19:00:43.989690   87211 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 19:00:43.990872   87211 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:00:43.991912   87211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:00:43.993425   87211 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:00:43.993510   87211 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:00:43.993597   87211 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:00:43.993687   87211 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:00:44.030158   87211 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 19:00:44.031277   87211 start.go:297] selected driver: kvm2
	I0717 19:00:44.031300   87211 start.go:901] validating driver "kvm2" against <nil>
	I0717 19:00:44.031315   87211 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:00:44.032177   87211 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:00:44.032296   87211 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:00:44.047964   87211 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:00:44.048010   87211 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0717 19:00:44.048032   87211 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0717 19:00:44.048321   87211 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 19:00:44.048349   87211 cni.go:84] Creating CNI manager for ""
	I0717 19:00:44.048361   87211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:00:44.048371   87211 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 19:00:44.048443   87211 start.go:340] cluster config:
	{Name:newest-cni-875270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:00:44.048578   87211 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:00:44.050341   87211 out.go:177] * Starting "newest-cni-875270" primary control-plane node in "newest-cni-875270" cluster
	I0717 19:00:44.051379   87211 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:00:44.051411   87211 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:00:44.051418   87211 cache.go:56] Caching tarball of preloaded images
	I0717 19:00:44.051521   87211 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:00:44.051533   87211 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 19:00:44.051620   87211 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/config.json ...
	I0717 19:00:44.051637   87211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/config.json: {Name:mk044a77ab4f3aa203a7005f385efe96ee1d0310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:00:44.051762   87211 start.go:360] acquireMachinesLock for newest-cni-875270: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:00:44.051789   87211 start.go:364] duration metric: took 15.052µs to acquireMachinesLock for "newest-cni-875270"
	I0717 19:00:44.051805   87211 start.go:93] Provisioning new machine with config: &{Name:newest-cni-875270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:00:44.051867   87211 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 19:00:44.053362   87211 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 19:00:44.053494   87211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:00:44.053531   87211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:00:44.068348   87211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0717 19:00:44.068858   87211 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:00:44.069443   87211 main.go:141] libmachine: Using API Version  1
	I0717 19:00:44.069466   87211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:00:44.069811   87211 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:00:44.069989   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetMachineName
	I0717 19:00:44.070148   87211 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:00:44.070293   87211 start.go:159] libmachine.API.Create for "newest-cni-875270" (driver="kvm2")
	I0717 19:00:44.070324   87211 client.go:168] LocalClient.Create starting
	I0717 19:00:44.070351   87211 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 19:00:44.070412   87211 main.go:141] libmachine: Decoding PEM data...
	I0717 19:00:44.070428   87211 main.go:141] libmachine: Parsing certificate...
	I0717 19:00:44.070476   87211 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 19:00:44.070494   87211 main.go:141] libmachine: Decoding PEM data...
	I0717 19:00:44.070503   87211 main.go:141] libmachine: Parsing certificate...
	I0717 19:00:44.070534   87211 main.go:141] libmachine: Running pre-create checks...
	I0717 19:00:44.070542   87211 main.go:141] libmachine: (newest-cni-875270) Calling .PreCreateCheck
	I0717 19:00:44.070902   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetConfigRaw
	I0717 19:00:44.071271   87211 main.go:141] libmachine: Creating machine...
	I0717 19:00:44.071292   87211 main.go:141] libmachine: (newest-cni-875270) Calling .Create
	I0717 19:00:44.071424   87211 main.go:141] libmachine: (newest-cni-875270) Creating KVM machine...
	I0717 19:00:44.072642   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found existing default KVM network
	I0717 19:00:44.074382   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:44.074238   87234 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f800}
	I0717 19:00:44.074413   87211 main.go:141] libmachine: (newest-cni-875270) DBG | created network xml: 
	I0717 19:00:44.074422   87211 main.go:141] libmachine: (newest-cni-875270) DBG | <network>
	I0717 19:00:44.074427   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   <name>mk-newest-cni-875270</name>
	I0717 19:00:44.074433   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   <dns enable='no'/>
	I0717 19:00:44.074437   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   
	I0717 19:00:44.074443   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 19:00:44.074448   87211 main.go:141] libmachine: (newest-cni-875270) DBG |     <dhcp>
	I0717 19:00:44.074454   87211 main.go:141] libmachine: (newest-cni-875270) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 19:00:44.074458   87211 main.go:141] libmachine: (newest-cni-875270) DBG |     </dhcp>
	I0717 19:00:44.074463   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   </ip>
	I0717 19:00:44.074472   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   
	I0717 19:00:44.074477   87211 main.go:141] libmachine: (newest-cni-875270) DBG | </network>
	I0717 19:00:44.074482   87211 main.go:141] libmachine: (newest-cni-875270) DBG | 
	I0717 19:00:44.079690   87211 main.go:141] libmachine: (newest-cni-875270) DBG | trying to create private KVM network mk-newest-cni-875270 192.168.39.0/24...
	I0717 19:00:44.150454   87211 main.go:141] libmachine: (newest-cni-875270) DBG | private KVM network mk-newest-cni-875270 192.168.39.0/24 created
	I0717 19:00:44.150478   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:44.150427   87234 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 19:00:44.150491   87211 main.go:141] libmachine: (newest-cni-875270) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270 ...
	I0717 19:00:44.150507   87211 main.go:141] libmachine: (newest-cni-875270) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 19:00:44.150600   87211 main.go:141] libmachine: (newest-cni-875270) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 19:00:44.386986   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:44.386867   87234 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa...
	I0717 19:00:44.565767   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:44.565598   87234 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/newest-cni-875270.rawdisk...
	I0717 19:00:44.565806   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Writing magic tar header
	I0717 19:00:44.565826   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Writing SSH key tar header
	I0717 19:00:44.565839   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:44.565763   87234 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270 ...
	I0717 19:00:44.565969   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270
	I0717 19:00:44.566000   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270 (perms=drwx------)
	I0717 19:00:44.566012   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 19:00:44.566028   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 19:00:44.566047   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 19:00:44.566062   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 19:00:44.566080   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 19:00:44.566094   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 19:00:44.566107   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 19:00:44.566122   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 19:00:44.566134   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins
	I0717 19:00:44.566152   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home
	I0717 19:00:44.566163   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Skipping /home - not owner
	I0717 19:00:44.566174   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 19:00:44.566189   87211 main.go:141] libmachine: (newest-cni-875270) Creating domain...
	I0717 19:00:44.567326   87211 main.go:141] libmachine: (newest-cni-875270) define libvirt domain using xml: 
	I0717 19:00:44.567346   87211 main.go:141] libmachine: (newest-cni-875270) <domain type='kvm'>
	I0717 19:00:44.567354   87211 main.go:141] libmachine: (newest-cni-875270)   <name>newest-cni-875270</name>
	I0717 19:00:44.567362   87211 main.go:141] libmachine: (newest-cni-875270)   <memory unit='MiB'>2200</memory>
	I0717 19:00:44.567367   87211 main.go:141] libmachine: (newest-cni-875270)   <vcpu>2</vcpu>
	I0717 19:00:44.567371   87211 main.go:141] libmachine: (newest-cni-875270)   <features>
	I0717 19:00:44.567376   87211 main.go:141] libmachine: (newest-cni-875270)     <acpi/>
	I0717 19:00:44.567383   87211 main.go:141] libmachine: (newest-cni-875270)     <apic/>
	I0717 19:00:44.567390   87211 main.go:141] libmachine: (newest-cni-875270)     <pae/>
	I0717 19:00:44.567399   87211 main.go:141] libmachine: (newest-cni-875270)     
	I0717 19:00:44.567408   87211 main.go:141] libmachine: (newest-cni-875270)   </features>
	I0717 19:00:44.567418   87211 main.go:141] libmachine: (newest-cni-875270)   <cpu mode='host-passthrough'>
	I0717 19:00:44.567427   87211 main.go:141] libmachine: (newest-cni-875270)   
	I0717 19:00:44.567432   87211 main.go:141] libmachine: (newest-cni-875270)   </cpu>
	I0717 19:00:44.567437   87211 main.go:141] libmachine: (newest-cni-875270)   <os>
	I0717 19:00:44.567441   87211 main.go:141] libmachine: (newest-cni-875270)     <type>hvm</type>
	I0717 19:00:44.567449   87211 main.go:141] libmachine: (newest-cni-875270)     <boot dev='cdrom'/>
	I0717 19:00:44.567453   87211 main.go:141] libmachine: (newest-cni-875270)     <boot dev='hd'/>
	I0717 19:00:44.567459   87211 main.go:141] libmachine: (newest-cni-875270)     <bootmenu enable='no'/>
	I0717 19:00:44.567465   87211 main.go:141] libmachine: (newest-cni-875270)   </os>
	I0717 19:00:44.567470   87211 main.go:141] libmachine: (newest-cni-875270)   <devices>
	I0717 19:00:44.567475   87211 main.go:141] libmachine: (newest-cni-875270)     <disk type='file' device='cdrom'>
	I0717 19:00:44.567486   87211 main.go:141] libmachine: (newest-cni-875270)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/boot2docker.iso'/>
	I0717 19:00:44.567514   87211 main.go:141] libmachine: (newest-cni-875270)       <target dev='hdc' bus='scsi'/>
	I0717 19:00:44.567524   87211 main.go:141] libmachine: (newest-cni-875270)       <readonly/>
	I0717 19:00:44.567534   87211 main.go:141] libmachine: (newest-cni-875270)     </disk>
	I0717 19:00:44.567544   87211 main.go:141] libmachine: (newest-cni-875270)     <disk type='file' device='disk'>
	I0717 19:00:44.567553   87211 main.go:141] libmachine: (newest-cni-875270)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 19:00:44.567564   87211 main.go:141] libmachine: (newest-cni-875270)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/newest-cni-875270.rawdisk'/>
	I0717 19:00:44.567572   87211 main.go:141] libmachine: (newest-cni-875270)       <target dev='hda' bus='virtio'/>
	I0717 19:00:44.567597   87211 main.go:141] libmachine: (newest-cni-875270)     </disk>
	I0717 19:00:44.567624   87211 main.go:141] libmachine: (newest-cni-875270)     <interface type='network'>
	I0717 19:00:44.567639   87211 main.go:141] libmachine: (newest-cni-875270)       <source network='mk-newest-cni-875270'/>
	I0717 19:00:44.567651   87211 main.go:141] libmachine: (newest-cni-875270)       <model type='virtio'/>
	I0717 19:00:44.567663   87211 main.go:141] libmachine: (newest-cni-875270)     </interface>
	I0717 19:00:44.567679   87211 main.go:141] libmachine: (newest-cni-875270)     <interface type='network'>
	I0717 19:00:44.567691   87211 main.go:141] libmachine: (newest-cni-875270)       <source network='default'/>
	I0717 19:00:44.567710   87211 main.go:141] libmachine: (newest-cni-875270)       <model type='virtio'/>
	I0717 19:00:44.567752   87211 main.go:141] libmachine: (newest-cni-875270)     </interface>
	I0717 19:00:44.567772   87211 main.go:141] libmachine: (newest-cni-875270)     <serial type='pty'>
	I0717 19:00:44.567782   87211 main.go:141] libmachine: (newest-cni-875270)       <target port='0'/>
	I0717 19:00:44.567789   87211 main.go:141] libmachine: (newest-cni-875270)     </serial>
	I0717 19:00:44.567800   87211 main.go:141] libmachine: (newest-cni-875270)     <console type='pty'>
	I0717 19:00:44.567811   87211 main.go:141] libmachine: (newest-cni-875270)       <target type='serial' port='0'/>
	I0717 19:00:44.567821   87211 main.go:141] libmachine: (newest-cni-875270)     </console>
	I0717 19:00:44.567831   87211 main.go:141] libmachine: (newest-cni-875270)     <rng model='virtio'>
	I0717 19:00:44.567849   87211 main.go:141] libmachine: (newest-cni-875270)       <backend model='random'>/dev/random</backend>
	I0717 19:00:44.567884   87211 main.go:141] libmachine: (newest-cni-875270)     </rng>
	I0717 19:00:44.567898   87211 main.go:141] libmachine: (newest-cni-875270)     
	I0717 19:00:44.567902   87211 main.go:141] libmachine: (newest-cni-875270)     
	I0717 19:00:44.567910   87211 main.go:141] libmachine: (newest-cni-875270)   </devices>
	I0717 19:00:44.567915   87211 main.go:141] libmachine: (newest-cni-875270) </domain>
	I0717 19:00:44.567923   87211 main.go:141] libmachine: (newest-cni-875270) 
	I0717 19:00:44.572539   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:3f:27:1d in network default
	I0717 19:00:44.573331   87211 main.go:141] libmachine: (newest-cni-875270) Ensuring networks are active...
	I0717 19:00:44.573367   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:44.574135   87211 main.go:141] libmachine: (newest-cni-875270) Ensuring network default is active
	I0717 19:00:44.574471   87211 main.go:141] libmachine: (newest-cni-875270) Ensuring network mk-newest-cni-875270 is active
	I0717 19:00:44.575097   87211 main.go:141] libmachine: (newest-cni-875270) Getting domain xml...
	I0717 19:00:44.575849   87211 main.go:141] libmachine: (newest-cni-875270) Creating domain...
	I0717 19:00:45.828532   87211 main.go:141] libmachine: (newest-cni-875270) Waiting to get IP...
	I0717 19:00:45.829469   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:45.829885   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:45.829912   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:45.829866   87234 retry.go:31] will retry after 263.920251ms: waiting for machine to come up
	I0717 19:00:46.095335   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:46.095954   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:46.095981   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:46.095911   87234 retry.go:31] will retry after 363.178186ms: waiting for machine to come up
	I0717 19:00:46.460512   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:46.460977   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:46.461012   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:46.460928   87234 retry.go:31] will retry after 409.665021ms: waiting for machine to come up
	I0717 19:00:46.872744   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:46.873247   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:46.873273   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:46.873205   87234 retry.go:31] will retry after 563.902745ms: waiting for machine to come up
	I0717 19:00:47.439068   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:47.439656   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:47.439683   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:47.439604   87234 retry.go:31] will retry after 733.359581ms: waiting for machine to come up
	I0717 19:00:48.174089   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:48.174581   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:48.174605   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:48.174521   87234 retry.go:31] will retry after 942.690499ms: waiting for machine to come up
	I0717 19:00:49.119131   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:49.119532   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:49.119562   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:49.119511   87234 retry.go:31] will retry after 1.141544671s: waiting for machine to come up
	I0717 19:00:50.262357   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:50.262777   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:50.262801   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:50.262738   87234 retry.go:31] will retry after 1.467163596s: waiting for machine to come up
	I0717 19:00:51.731003   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:51.731354   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:51.731376   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:51.731308   87234 retry.go:31] will retry after 1.199886437s: waiting for machine to come up
	I0717 19:00:52.932457   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:52.933110   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:52.933139   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:52.933070   87234 retry.go:31] will retry after 1.540490534s: waiting for machine to come up
	
	
	==> CRI-O <==
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.047832565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242855047794056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=793ecde3-5736-47ca-a7d0-d513b6b243ac name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.048588206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ad669e4-5bd1-419d-b3b5-f6ca72ac6467 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.048765191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ad669e4-5bd1-419d-b3b5-f6ca72ac6467 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.049074239Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:219119fb42a606572c48ffd89d1db8c75d28f283757c8a3aceebcd1547002903,PodSandboxId:0712ba80efc2eeb4c0f7a4de9f9313bf552e435868cba09fe7e1e97faec06ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921506788534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-r9xns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29624b73-848d-4a35-96bc-92f9627842fe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00db5d50aef9cac73ee3b0add4694bea89c8599c4239a5b742f76e0ad78b95b,PodSandboxId:b52438192d162323e79e91ecdf9a9388dfd4d1f64d74eee93274b3dce06e84b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921485923660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tx7nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 085ec394-1ca7-4b9b-9b54-b4fdab45bd75,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:412fe67a8c48127d6c17bfe9b629a684a421319e0d6df01e28e0cedc335b5b09,PodSandboxId:2e11af8b33d3f8a8f973acacc5e1704033b24b19c96a5535e1d901ca5d6d196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721241921060595751,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9730cf9-c0f1-4afc-94cc-cbd825158d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b6df40c85430006c2c744f618ad3022fb30f55be4f098adf069ae9a98e12db,PodSandboxId:ae6bd4e20bf24dd17924b7cbf69ea6fbac7c95bb15c90afc31ec91dbde1e8d39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721241919831588645,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaedb8f-b248-43ac-bd49-4f97d26aa1f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ebbc1444f5b55d1384bdec39e384901df7910fa5870cabef06fb9ae0d5804e,PodSandboxId:7ae8c4f26db78059b078cd1f618cd5ffaf77045de4d1bf3fd277a37153cc9672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721241909297118556,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c87a1116618d10cb40a78d0612b9d76a561cf9ad929a91228b68060259248098,PodSandboxId:28dcf525284313916749748cc137ac8a57ed031581a2e7c23485716f22bc769a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721241909219163696,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83db3e8a25d12043a2cc2b56a7a5959d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd3621d46cb4fa80bda24f313af1c43f26c07ceedfabfd7100a19a1d3c1b5ed,PodSandboxId:c41eeadad57288abf631f8a21d4c71283cf357404258655478ba36f54a1a7586,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721241909188988683,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b73ab2d9631fbbc6ef1f1e2293feaa,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f70542191feea896610d6dce3b898ef2a6258658ba06942c54e5fb1c8673788,PodSandboxId:5e139ace02f9f4c0e79bd5548ec0251e73c7a31dd5b70d024427a2a1afe0f6d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721241909150221436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab561c20ed9012c8eacc8441045039ea,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0d5f05d9fd479b650215b11c498b3356e4b5d8ba877723e612d3fb09c5675b0,PodSandboxId:33718ca4f01d9d6ddc6b368199f27792f545d366e1d33cac1f0f4b78841c2c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721241621567533537,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ad669e4-5bd1-419d-b3b5-f6ca72ac6467 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.104605880Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1d8b2c2-3ff1-4a82-af18-83717737e6f6 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.104761886Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1d8b2c2-3ff1-4a82-af18-83717737e6f6 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.105893563Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a4a8fe35-5ae8-4ef5-92a3-ebbc7496fa74 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.106402898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242855106374478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a4a8fe35-5ae8-4ef5-92a3-ebbc7496fa74 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.106971482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c09a58f-d1fd-4fa9-86fa-594223b6b3cf name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.107023106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c09a58f-d1fd-4fa9-86fa-594223b6b3cf name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.107217532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:219119fb42a606572c48ffd89d1db8c75d28f283757c8a3aceebcd1547002903,PodSandboxId:0712ba80efc2eeb4c0f7a4de9f9313bf552e435868cba09fe7e1e97faec06ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921506788534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-r9xns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29624b73-848d-4a35-96bc-92f9627842fe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00db5d50aef9cac73ee3b0add4694bea89c8599c4239a5b742f76e0ad78b95b,PodSandboxId:b52438192d162323e79e91ecdf9a9388dfd4d1f64d74eee93274b3dce06e84b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921485923660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tx7nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 085ec394-1ca7-4b9b-9b54-b4fdab45bd75,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:412fe67a8c48127d6c17bfe9b629a684a421319e0d6df01e28e0cedc335b5b09,PodSandboxId:2e11af8b33d3f8a8f973acacc5e1704033b24b19c96a5535e1d901ca5d6d196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721241921060595751,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9730cf9-c0f1-4afc-94cc-cbd825158d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b6df40c85430006c2c744f618ad3022fb30f55be4f098adf069ae9a98e12db,PodSandboxId:ae6bd4e20bf24dd17924b7cbf69ea6fbac7c95bb15c90afc31ec91dbde1e8d39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721241919831588645,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaedb8f-b248-43ac-bd49-4f97d26aa1f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ebbc1444f5b55d1384bdec39e384901df7910fa5870cabef06fb9ae0d5804e,PodSandboxId:7ae8c4f26db78059b078cd1f618cd5ffaf77045de4d1bf3fd277a37153cc9672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721241909297118556,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c87a1116618d10cb40a78d0612b9d76a561cf9ad929a91228b68060259248098,PodSandboxId:28dcf525284313916749748cc137ac8a57ed031581a2e7c23485716f22bc769a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721241909219163696,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83db3e8a25d12043a2cc2b56a7a5959d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd3621d46cb4fa80bda24f313af1c43f26c07ceedfabfd7100a19a1d3c1b5ed,PodSandboxId:c41eeadad57288abf631f8a21d4c71283cf357404258655478ba36f54a1a7586,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721241909188988683,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b73ab2d9631fbbc6ef1f1e2293feaa,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f70542191feea896610d6dce3b898ef2a6258658ba06942c54e5fb1c8673788,PodSandboxId:5e139ace02f9f4c0e79bd5548ec0251e73c7a31dd5b70d024427a2a1afe0f6d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721241909150221436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab561c20ed9012c8eacc8441045039ea,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0d5f05d9fd479b650215b11c498b3356e4b5d8ba877723e612d3fb09c5675b0,PodSandboxId:33718ca4f01d9d6ddc6b368199f27792f545d366e1d33cac1f0f4b78841c2c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721241621567533537,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c09a58f-d1fd-4fa9-86fa-594223b6b3cf name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.157212143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd30e908-0141-4ffc-8601-12c48c14a32b name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.157333893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd30e908-0141-4ffc-8601-12c48c14a32b name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.158770140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=959bb0e8-755e-446a-9f57-93bb4954a219 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.159133369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242855159099001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=959bb0e8-755e-446a-9f57-93bb4954a219 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.159781621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=747227ee-7803-46a5-8417-84234916afba name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.159862415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=747227ee-7803-46a5-8417-84234916afba name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.160099716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:219119fb42a606572c48ffd89d1db8c75d28f283757c8a3aceebcd1547002903,PodSandboxId:0712ba80efc2eeb4c0f7a4de9f9313bf552e435868cba09fe7e1e97faec06ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921506788534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-r9xns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29624b73-848d-4a35-96bc-92f9627842fe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00db5d50aef9cac73ee3b0add4694bea89c8599c4239a5b742f76e0ad78b95b,PodSandboxId:b52438192d162323e79e91ecdf9a9388dfd4d1f64d74eee93274b3dce06e84b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921485923660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tx7nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 085ec394-1ca7-4b9b-9b54-b4fdab45bd75,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:412fe67a8c48127d6c17bfe9b629a684a421319e0d6df01e28e0cedc335b5b09,PodSandboxId:2e11af8b33d3f8a8f973acacc5e1704033b24b19c96a5535e1d901ca5d6d196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721241921060595751,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9730cf9-c0f1-4afc-94cc-cbd825158d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b6df40c85430006c2c744f618ad3022fb30f55be4f098adf069ae9a98e12db,PodSandboxId:ae6bd4e20bf24dd17924b7cbf69ea6fbac7c95bb15c90afc31ec91dbde1e8d39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721241919831588645,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaedb8f-b248-43ac-bd49-4f97d26aa1f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ebbc1444f5b55d1384bdec39e384901df7910fa5870cabef06fb9ae0d5804e,PodSandboxId:7ae8c4f26db78059b078cd1f618cd5ffaf77045de4d1bf3fd277a37153cc9672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721241909297118556,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c87a1116618d10cb40a78d0612b9d76a561cf9ad929a91228b68060259248098,PodSandboxId:28dcf525284313916749748cc137ac8a57ed031581a2e7c23485716f22bc769a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721241909219163696,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83db3e8a25d12043a2cc2b56a7a5959d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd3621d46cb4fa80bda24f313af1c43f26c07ceedfabfd7100a19a1d3c1b5ed,PodSandboxId:c41eeadad57288abf631f8a21d4c71283cf357404258655478ba36f54a1a7586,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721241909188988683,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b73ab2d9631fbbc6ef1f1e2293feaa,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f70542191feea896610d6dce3b898ef2a6258658ba06942c54e5fb1c8673788,PodSandboxId:5e139ace02f9f4c0e79bd5548ec0251e73c7a31dd5b70d024427a2a1afe0f6d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721241909150221436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab561c20ed9012c8eacc8441045039ea,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0d5f05d9fd479b650215b11c498b3356e4b5d8ba877723e612d3fb09c5675b0,PodSandboxId:33718ca4f01d9d6ddc6b368199f27792f545d366e1d33cac1f0f4b78841c2c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721241621567533537,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=747227ee-7803-46a5-8417-84234916afba name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.193064749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b6a6ffd-ccf4-4db8-ad61-16fe14c08de0 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.193147515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b6a6ffd-ccf4-4db8-ad61-16fe14c08de0 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.194371298Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=876c90c7-68b3-48c0-95fc-26eed0edf8e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.194765190Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242855194743116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=876c90c7-68b3-48c0-95fc-26eed0edf8e2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.195265855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67202e6a-4c67-418e-9726-ae74616cf896 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.195317209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67202e6a-4c67-418e-9726-ae74616cf896 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:55 no-preload-066175 crio[725]: time="2024-07-17 19:00:55.195515129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:219119fb42a606572c48ffd89d1db8c75d28f283757c8a3aceebcd1547002903,PodSandboxId:0712ba80efc2eeb4c0f7a4de9f9313bf552e435868cba09fe7e1e97faec06ab1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921506788534,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-r9xns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29624b73-848d-4a35-96bc-92f9627842fe,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b00db5d50aef9cac73ee3b0add4694bea89c8599c4239a5b742f76e0ad78b95b,PodSandboxId:b52438192d162323e79e91ecdf9a9388dfd4d1f64d74eee93274b3dce06e84b9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241921485923660,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-tx7nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 085ec394-1ca7-4b9b-9b54-b4fdab45bd75,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:412fe67a8c48127d6c17bfe9b629a684a421319e0d6df01e28e0cedc335b5b09,PodSandboxId:2e11af8b33d3f8a8f973acacc5e1704033b24b19c96a5535e1d901ca5d6d196b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1721241921060595751,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9730cf9-c0f1-4afc-94cc-cbd825158d7c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8b6df40c85430006c2c744f618ad3022fb30f55be4f098adf069ae9a98e12db,PodSandboxId:ae6bd4e20bf24dd17924b7cbf69ea6fbac7c95bb15c90afc31ec91dbde1e8d39,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:
1721241919831588645,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rgp5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aaedb8f-b248-43ac-bd49-4f97d26aa1f6,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ebbc1444f5b55d1384bdec39e384901df7910fa5870cabef06fb9ae0d5804e,PodSandboxId:7ae8c4f26db78059b078cd1f618cd5ffaf77045de4d1bf3fd277a37153cc9672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721241909297118556,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c87a1116618d10cb40a78d0612b9d76a561cf9ad929a91228b68060259248098,PodSandboxId:28dcf525284313916749748cc137ac8a57ed031581a2e7c23485716f22bc769a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1721241909219163696,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83db3e8a25d12043a2cc2b56a7a5959d,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dd3621d46cb4fa80bda24f313af1c43f26c07ceedfabfd7100a19a1d3c1b5ed,PodSandboxId:c41eeadad57288abf631f8a21d4c71283cf357404258655478ba36f54a1a7586,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721241909188988683,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85b73ab2d9631fbbc6ef1f1e2293feaa,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f70542191feea896610d6dce3b898ef2a6258658ba06942c54e5fb1c8673788,PodSandboxId:5e139ace02f9f4c0e79bd5548ec0251e73c7a31dd5b70d024427a2a1afe0f6d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1721241909150221436,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab561c20ed9012c8eacc8441045039ea,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0d5f05d9fd479b650215b11c498b3356e4b5d8ba877723e612d3fb09c5675b0,PodSandboxId:33718ca4f01d9d6ddc6b368199f27792f545d366e1d33cac1f0f4b78841c2c5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721241621567533537,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-066175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 568a24fdb251fb4f02d77cd5aa7a2257,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67202e6a-4c67-418e-9726-ae74616cf896 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	219119fb42a60       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   0712ba80efc2e       coredns-5cfdc65f69-r9xns
	b00db5d50aef9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   b52438192d162       coredns-5cfdc65f69-tx7nc
	412fe67a8c481       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   2e11af8b33d3f       storage-provisioner
	f8b6df40c8543       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   15 minutes ago      Running             kube-proxy                0                   ae6bd4e20bf24       kube-proxy-rgp5c
	35ebbc1444f5b       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   15 minutes ago      Running             kube-apiserver            2                   7ae8c4f26db78       kube-apiserver-no-preload-066175
	c87a1116618d1       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   15 minutes ago      Running             kube-controller-manager   2                   28dcf52528431       kube-controller-manager-no-preload-066175
	2dd3621d46cb4       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   15 minutes ago      Running             etcd                      2                   c41eeadad5728       etcd-no-preload-066175
	8f70542191fee       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   15 minutes ago      Running             kube-scheduler            2                   5e139ace02f9f       kube-scheduler-no-preload-066175
	b0d5f05d9fd47       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   20 minutes ago      Exited              kube-apiserver            1                   33718ca4f01d9       kube-apiserver-no-preload-066175
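	
	(The container status table above is captured from the node's CRI-O runtime. As a rough reproduction sketch, not part of the recorded test run, a comparable listing can usually be pulled from the guest with crictl; the profile name is taken from the logs and the exact invocation is an assumption:)
	
	  # sketch: list all CRI-O containers on the no-preload node (assumes crictl is present in the guest)
	  out/minikube-linux-amd64 -p no-preload-066175 ssh "sudo crictl ps -a"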
	
	
	==> coredns [219119fb42a606572c48ffd89d1db8c75d28f283757c8a3aceebcd1547002903] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b00db5d50aef9cac73ee3b0add4694bea89c8599c4239a5b742f76e0ad78b95b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-066175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-066175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=no-preload-066175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_45_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:45:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-066175
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:00:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 19:00:41 +0000   Wed, 17 Jul 2024 18:45:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 19:00:41 +0000   Wed, 17 Jul 2024 18:45:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 19:00:41 +0000   Wed, 17 Jul 2024 18:45:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 19:00:41 +0000   Wed, 17 Jul 2024 18:45:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.216
	  Hostname:    no-preload-066175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b465df7f5fd4451a211c0080bed4e39
	  System UUID:                5b465df7-f5fd-4451-a211-c0080bed4e39
	  Boot ID:                    ef1cf6fc-b36c-433e-8163-9cbb9e5eb3df
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-r9xns                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5cfdc65f69-tx7nc                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-066175                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-066175             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-066175    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-rgp5c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-066175             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-78fcd8795b-kj29z              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node no-preload-066175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node no-preload-066175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node no-preload-066175 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node no-preload-066175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node no-preload-066175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node no-preload-066175 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node no-preload-066175 event: Registered Node no-preload-066175 in Controller
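	
	(The "describe nodes" block above is the standard kubectl node description. As a hedged sketch, assuming the kubectl context is named after the minikube profile as elsewhere in this report, it can be regenerated with:)
	
	  # sketch: re-generate the node description for the no-preload profile
	  kubectl --context no-preload-066175 describe node no-preload-066175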
	
	
	==> dmesg <==
	[  +0.036103] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.419250] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.679395] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.518588] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul17 18:40] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.059807] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053224] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.184103] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.115805] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.279970] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[ +14.546242] systemd-fstab-generator[1174]: Ignoring "noauto" option for root device
	[  +0.072059] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.577342] systemd-fstab-generator[1296]: Ignoring "noauto" option for root device
	[  +5.211006] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.269111] kauditd_printk_skb: 53 callbacks suppressed
	[  +5.077301] kauditd_printk_skb: 25 callbacks suppressed
	[Jul17 18:45] systemd-fstab-generator[2943]: Ignoring "noauto" option for root device
	[  +0.063095] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.003910] systemd-fstab-generator[3265]: Ignoring "noauto" option for root device
	[  +0.084961] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.263596] systemd-fstab-generator[3377]: Ignoring "noauto" option for root device
	[  +0.097525] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.073978] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [2dd3621d46cb4fa80bda24f313af1c43f26c07ceedfabfd7100a19a1d3c1b5ed] <==
	{"level":"info","ts":"2024-07-17T18:45:09.946881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:09.946922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 received MsgPreVoteResp from 14ddabf6165d7543 at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:09.946992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:09.947017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 received MsgVoteResp from 14ddabf6165d7543 at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:09.947082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"14ddabf6165d7543 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:09.947108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 14ddabf6165d7543 elected leader 14ddabf6165d7543 at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:09.951859Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:09.952315Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"14ddabf6165d7543","local-member-attributes":"{Name:no-preload-066175 ClientURLs:[https://192.168.72.216:2379]}","request-path":"/0/members/14ddabf6165d7543/attributes","cluster-id":"d730758011f9da75","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:45:09.95267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:09.953102Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:09.955981Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T18:45:09.960899Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.216:2379"}
	{"level":"info","ts":"2024-07-17T18:45:09.961442Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-17T18:45:09.96419Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T18:45:09.964542Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d730758011f9da75","local-member-id":"14ddabf6165d7543","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:09.968696Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:09.96877Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:09.971676Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:45:09.971706Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:55:10.243183Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":723}
	{"level":"info","ts":"2024-07-17T18:55:10.251778Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":723,"took":"8.193797ms","hash":2027258298,"current-db-size-bytes":2252800,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2252800,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-17T18:55:10.251848Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2027258298,"revision":723,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T19:00:10.252354Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":967}
	{"level":"info","ts":"2024-07-17T19:00:10.256273Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":967,"took":"3.311049ms","hash":579088796,"current-db-size-bytes":2252800,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-17T19:00:10.256349Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":579088796,"revision":967,"compact-revision":723}
	
	
	==> kernel <==
	 19:00:55 up 21 min,  0 users,  load average: 0.15, 0.12, 0.09
	Linux no-preload-066175 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [35ebbc1444f5b55d1384bdec39e384901df7910fa5870cabef06fb9ae0d5804e] <==
	I0717 18:56:12.638337       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 18:56:12.638413       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:58:12.639034       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 18:58:12.639121       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0717 18:58:12.639205       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 18:58:12.639254       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 18:58:12.640363       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 18:58:12.640438       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:00:11.640454       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 19:00:11.640663       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0717 19:00:12.642774       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 19:00:12.643051       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0717 19:00:12.642985       1 handler_proxy.go:99] no RequestInfo found in the context
	E0717 19:00:12.643250       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0717 19:00:12.644226       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 19:00:12.644342       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
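	
	(The repeated 503 errors above come from the v1beta1.metrics.k8s.io APIService that metrics-server is expected to back. A quick way to inspect whether that APIService reports Available=True, sketched here under the assumption that the kubectl context name matches the profile:)
	
	  # sketch: check availability of the metrics-server APIService
	  kubectl --context no-preload-066175 get apiservice v1beta1.metrics.k8s.io -o wide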
	
	
	==> kube-apiserver [b0d5f05d9fd479b650215b11c498b3356e4b5d8ba877723e612d3fb09c5675b0] <==
	W0717 18:45:01.696011       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.697396       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.703789       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.728238       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.756119       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.798300       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.806153       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:01.966431       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.118720       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.143938       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.148236       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.162904       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.284148       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.315010       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.365832       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.488534       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.622921       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:02.714564       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.149796       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.235916       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.351231       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.356058       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.472422       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.513947       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:06.703000       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c87a1116618d10cb40a78d0612b9d76a561cf9ad929a91228b68060259248098] <==
	E0717 18:55:49.576211       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:55:49.665406       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 18:56:08.512576       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="191.358µs"
	I0717 18:56:19.505861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="85.94µs"
	E0717 18:56:19.583018       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:56:19.680324       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:56:49.589405       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:56:49.687515       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:57:19.595957       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:57:19.695096       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:57:49.602800       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:57:49.703081       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:58:19.611774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:58:19.718675       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:58:49.619023       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:58:49.728092       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:59:19.626001       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:59:19.736370       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:59:49.633793       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 18:59:49.745158       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:00:19.640926       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:00:19.757459       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 19:00:41.566013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-066175"
	E0717 19:00:49.648198       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0717 19:00:49.765216       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f8b6df40c85430006c2c744f618ad3022fb30f55be4f098adf069ae9a98e12db] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0717 18:45:20.280767       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0717 18:45:20.290440       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.72.216"]
	E0717 18:45:20.290516       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0717 18:45:20.369075       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0717 18:45:20.369117       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:45:20.369152       1 server_linux.go:170] "Using iptables Proxier"
	I0717 18:45:20.372893       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0717 18:45:20.373126       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0717 18:45:20.373150       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:45:20.374721       1 config.go:197] "Starting service config controller"
	I0717 18:45:20.374745       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:45:20.374765       1 config.go:104] "Starting endpoint slice config controller"
	I0717 18:45:20.374769       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:45:20.375373       1 config.go:326] "Starting node config controller"
	I0717 18:45:20.375399       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:45:20.474876       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:45:20.474946       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:45:20.476382       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8f70542191feea896610d6dce3b898ef2a6258658ba06942c54e5fb1c8673788] <==
	W0717 18:45:11.671430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:45:11.671464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:11.672083       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:45:11.672126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.512357       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 18:45:12.512409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.557583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:45:12.557678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.608394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:45:12.608461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.738879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:45:12.738997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.838597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:45:12.838678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.938070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:45:12.938125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.938274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:45:12.938306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.942146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:45:12.942199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:12.946781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:45:12.946877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0717 18:45:13.131445       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:45:13.131520       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0717 18:45:15.039226       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 18:58:14 no-preload-066175 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:58:14 no-preload-066175 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:58:14 no-preload-066175 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:58:21 no-preload-066175 kubelet[3272]: E0717 18:58:21.493480    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:58:36 no-preload-066175 kubelet[3272]: E0717 18:58:36.495246    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:58:48 no-preload-066175 kubelet[3272]: E0717 18:58:48.492529    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:59:03 no-preload-066175 kubelet[3272]: E0717 18:59:03.492806    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:59:14 no-preload-066175 kubelet[3272]: E0717 18:59:14.530830    3272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 18:59:14 no-preload-066175 kubelet[3272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:59:14 no-preload-066175 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:59:14 no-preload-066175 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:59:14 no-preload-066175 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:59:18 no-preload-066175 kubelet[3272]: E0717 18:59:18.494660    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:59:31 no-preload-066175 kubelet[3272]: E0717 18:59:31.492841    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:59:43 no-preload-066175 kubelet[3272]: E0717 18:59:43.492537    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 18:59:57 no-preload-066175 kubelet[3272]: E0717 18:59:57.492679    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 19:00:08 no-preload-066175 kubelet[3272]: E0717 19:00:08.493265    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 19:00:14 no-preload-066175 kubelet[3272]: E0717 19:00:14.531920    3272 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:00:14 no-preload-066175 kubelet[3272]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:00:14 no-preload-066175 kubelet[3272]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:00:14 no-preload-066175 kubelet[3272]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:00:14 no-preload-066175 kubelet[3272]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:00:23 no-preload-066175 kubelet[3272]: E0717 19:00:23.492465    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 19:00:35 no-preload-066175 kubelet[3272]: E0717 19:00:35.492592    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	Jul 17 19:00:47 no-preload-066175 kubelet[3272]: E0717 19:00:47.492217    3272 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-kj29z" podUID="4b99bc9f-b5a7-4e86-b3ba-2607f9840957"
	
	
	==> storage-provisioner [412fe67a8c48127d6c17bfe9b629a684a421319e0d6df01e28e0cedc335b5b09] <==
	I0717 18:45:21.196031       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:45:21.215021       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:45:21.215081       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:45:21.237919       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:45:21.238072       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-066175_72746047-1445-4b3c-b5b6-a3e5e3f7b418!
	I0717 18:45:21.239184       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c2af562a-0d45-40f0-ba2d-7c284b454a5b", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-066175_72746047-1445-4b3c-b5b6-a3e5e3f7b418 became leader
	I0717 18:45:21.338978       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-066175_72746047-1445-4b3c-b5b6-a3e5e3f7b418!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-066175 -n no-preload-066175
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-066175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-kj29z
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-066175 describe pod metrics-server-78fcd8795b-kj29z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-066175 describe pod metrics-server-78fcd8795b-kj29z: exit status 1 (59.951138ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-kj29z" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-066175 describe pod metrics-server-78fcd8795b-kj29z: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (378.83s)
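The kube-apiserver and kube-controller-manager logs above show the v1beta1.metrics.k8s.io APIService answering 503 and its discovery marked stale, while the kubelet reports ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4 (the deliberately unreachable registry configured when the metrics-server addon was enabled). A minimal sketch of inspecting that state by hand while the profile is still up; the k8s-app=metrics-server label and the metrics-server Deployment name are assumptions inferred from the metrics-server-78fcd8795b ReplicaSet in the logs, not taken from the harness:

	# check whether the aggregated metrics API is registered and available
	kubectl --context no-preload-066175 get apiservice v1beta1.metrics.k8s.io
	# list the metrics-server workload (label assumed)
	kubectl --context no-preload-066175 -n kube-system get deploy,pods -l k8s-app=metrics-server
	# show image, events and pull errors for the deployment (name inferred from the ReplicaSet)
	kubectl --context no-preload-066175 -n kube-system describe deploy metrics-server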

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (474.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 19:02:55.830146293 +0000 UTC m=+6690.559342255
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-022930 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-022930 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.756µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-022930 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
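The check at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper Deployment to carry the registry.k8s.io/echoserver:1.4 image substituted via --images=MetricsScraper when the dashboard addon was enabled (see the Audit table below). A rough hand-run equivalent of that image check, assuming the cluster is reachable; the jsonpath expression is a sketch, not the harness's exact query:

	# print the container image(s) used by the dashboard-metrics-scraper deployment
	kubectl --context default-k8s-diff-port-022930 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'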
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930
E0717 19:02:55.855614   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-022930 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-022930 logs -n 25: (1.099341103s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-022930  | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC | 17 Jul 24 18:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-527415                 | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-019549        | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-066175                  | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-066175 --memory=2200                     | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:45 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-019549             | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-022930       | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC | 17 Jul 24 18:45 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 19:00 UTC | 17 Jul 24 19:00 UTC |
	| start   | -p newest-cni-875270 --memory=2200 --alsologtostderr   | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:00 UTC | 17 Jul 24 19:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-066175                                   | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 19:00 UTC | 17 Jul 24 19:00 UTC |
	| delete  | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 19:01 UTC | 17 Jul 24 19:01 UTC |
	| addons  | enable metrics-server -p newest-cni-875270             | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:01 UTC | 17 Jul 24 19:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-875270                                   | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:01 UTC | 17 Jul 24 19:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-875270                  | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:01 UTC | 17 Jul 24 19:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-875270 --memory=2200 --alsologtostderr   | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:01 UTC | 17 Jul 24 19:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| image   | newest-cni-875270 image list                           | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:02 UTC | 17 Jul 24 19:02 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-875270                                   | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:02 UTC | 17 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-875270                                   | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:02 UTC | 17 Jul 24 19:02 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-875270                                   | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:02 UTC | 17 Jul 24 19:02 UTC |
	| delete  | -p newest-cni-875270                                   | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:02 UTC | 17 Jul 24 19:02 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:01:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:01:39.148602   88123 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:01:39.148881   88123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:01:39.148896   88123 out.go:304] Setting ErrFile to fd 2...
	I0717 19:01:39.148902   88123 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:01:39.149128   88123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 19:01:39.149615   88123 out.go:298] Setting JSON to false
	I0717 19:01:39.150519   88123 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9842,"bootTime":1721233057,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:01:39.150571   88123 start.go:139] virtualization: kvm guest
	I0717 19:01:39.152610   88123 out.go:177] * [newest-cni-875270] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:01:39.154008   88123 notify.go:220] Checking for updates...
	I0717 19:01:39.154040   88123 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 19:01:39.155251   88123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:01:39.156514   88123 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 19:01:39.157736   88123 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 19:01:39.158710   88123 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:01:39.159709   88123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:01:39.161127   88123 config.go:182] Loaded profile config "newest-cni-875270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:01:39.161570   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:01:39.161643   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:01:39.175734   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I0717 19:01:39.176084   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:01:39.176615   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:01:39.176637   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:01:39.177003   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:01:39.177206   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:39.177451   88123 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:01:39.177741   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:01:39.177774   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:01:39.191576   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0717 19:01:39.191939   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:01:39.192315   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:01:39.192329   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:01:39.192629   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:01:39.192782   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:39.226033   88123 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 19:01:39.227230   88123 start.go:297] selected driver: kvm2
	I0717 19:01:39.227251   88123 start.go:901] validating driver "kvm2" against &{Name:newest-cni-875270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:01:39.227359   88123 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:01:39.227980   88123 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:01:39.228050   88123 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:01:39.241827   88123 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:01:39.242184   88123 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 19:01:39.242252   88123 cni.go:84] Creating CNI manager for ""
	I0717 19:01:39.242266   88123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:01:39.242324   88123 start.go:340] cluster config:
	{Name:newest-cni-875270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:01:39.242422   88123 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:01:39.244008   88123 out.go:177] * Starting "newest-cni-875270" primary control-plane node in "newest-cni-875270" cluster
	I0717 19:01:39.245141   88123 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:01:39.245176   88123 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:01:39.245186   88123 cache.go:56] Caching tarball of preloaded images
	I0717 19:01:39.245250   88123 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:01:39.245260   88123 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 19:01:39.245365   88123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/config.json ...
	I0717 19:01:39.245533   88123 start.go:360] acquireMachinesLock for newest-cni-875270: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:01:39.245578   88123 start.go:364] duration metric: took 26.19µs to acquireMachinesLock for "newest-cni-875270"
	I0717 19:01:39.245590   88123 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:01:39.245597   88123 fix.go:54] fixHost starting: 
	I0717 19:01:39.245830   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:01:39.245864   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:01:39.259343   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0717 19:01:39.259724   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:01:39.260105   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:01:39.260125   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:01:39.260425   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:01:39.260644   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:39.260772   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetState
	I0717 19:01:39.262412   88123 fix.go:112] recreateIfNeeded on newest-cni-875270: state=Stopped err=<nil>
	I0717 19:01:39.262441   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	W0717 19:01:39.262580   88123 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 19:01:39.264504   88123 out.go:177] * Restarting existing kvm2 VM for "newest-cni-875270" ...
	I0717 19:01:39.265669   88123 main.go:141] libmachine: (newest-cni-875270) Calling .Start
	I0717 19:01:39.265836   88123 main.go:141] libmachine: (newest-cni-875270) Ensuring networks are active...
	I0717 19:01:39.266532   88123 main.go:141] libmachine: (newest-cni-875270) Ensuring network default is active
	I0717 19:01:39.266911   88123 main.go:141] libmachine: (newest-cni-875270) Ensuring network mk-newest-cni-875270 is active
	I0717 19:01:39.267258   88123 main.go:141] libmachine: (newest-cni-875270) Getting domain xml...
	I0717 19:01:39.267939   88123 main.go:141] libmachine: (newest-cni-875270) Creating domain...
	I0717 19:01:40.458179   88123 main.go:141] libmachine: (newest-cni-875270) Waiting to get IP...
	I0717 19:01:40.459030   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:40.459436   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:40.459476   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:40.459406   88158 retry.go:31] will retry after 194.1988ms: waiting for machine to come up
	I0717 19:01:40.654736   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:40.655186   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:40.655213   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:40.655124   88158 retry.go:31] will retry after 277.650252ms: waiting for machine to come up
	I0717 19:01:40.934390   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:40.935022   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:40.935054   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:40.934975   88158 retry.go:31] will retry after 358.133514ms: waiting for machine to come up
	I0717 19:01:41.294242   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:41.294797   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:41.294819   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:41.294753   88158 retry.go:31] will retry after 416.026818ms: waiting for machine to come up
	I0717 19:01:41.712380   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:41.712826   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:41.712847   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:41.712784   88158 retry.go:31] will retry after 597.116077ms: waiting for machine to come up
	I0717 19:01:42.311234   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:42.311684   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:42.311710   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:42.311632   88158 retry.go:31] will retry after 885.345271ms: waiting for machine to come up
	I0717 19:01:43.198752   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:43.199352   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:43.199385   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:43.199289   88158 retry.go:31] will retry after 770.635325ms: waiting for machine to come up
	I0717 19:01:43.971229   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:43.971594   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:43.971616   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:43.971566   88158 retry.go:31] will retry after 1.198110813s: waiting for machine to come up
	I0717 19:01:45.171563   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:45.172101   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:45.172126   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:45.172053   88158 retry.go:31] will retry after 1.243370637s: waiting for machine to come up
	I0717 19:01:46.417441   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:46.417894   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:46.417934   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:46.417843   88158 retry.go:31] will retry after 1.68826334s: waiting for machine to come up
	I0717 19:01:48.108609   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:48.109086   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:48.109110   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:48.109047   88158 retry.go:31] will retry after 2.228713151s: waiting for machine to come up
	I0717 19:01:50.339176   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:50.339549   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:50.339578   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:50.339509   88158 retry.go:31] will retry after 2.293739223s: waiting for machine to come up
	I0717 19:01:52.634253   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:52.634592   88123 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:01:52.634614   88123 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:01:52.634551   88158 retry.go:31] will retry after 2.803363505s: waiting for machine to come up
	I0717 19:01:55.439942   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.440368   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has current primary IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.440386   88123 main.go:141] libmachine: (newest-cni-875270) Found IP for machine: 192.168.39.225
	I0717 19:01:55.440406   88123 main.go:141] libmachine: (newest-cni-875270) Reserving static IP address...
	I0717 19:01:55.440777   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "newest-cni-875270", mac: "52:54:00:2d:7e:1a", ip: "192.168.39.225"} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:55.440812   88123 main.go:141] libmachine: (newest-cni-875270) Reserved static IP address: 192.168.39.225
	I0717 19:01:55.440827   88123 main.go:141] libmachine: (newest-cni-875270) DBG | skip adding static IP to network mk-newest-cni-875270 - found existing host DHCP lease matching {name: "newest-cni-875270", mac: "52:54:00:2d:7e:1a", ip: "192.168.39.225"}
	I0717 19:01:55.440840   88123 main.go:141] libmachine: (newest-cni-875270) DBG | Getting to WaitForSSH function...
	I0717 19:01:55.440856   88123 main.go:141] libmachine: (newest-cni-875270) Waiting for SSH to be available...
	I0717 19:01:55.443068   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.443440   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:55.443468   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.443567   88123 main.go:141] libmachine: (newest-cni-875270) DBG | Using SSH client type: external
	I0717 19:01:55.443590   88123 main.go:141] libmachine: (newest-cni-875270) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa (-rw-------)
	I0717 19:01:55.443632   88123 main.go:141] libmachine: (newest-cni-875270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:01:55.443646   88123 main.go:141] libmachine: (newest-cni-875270) DBG | About to run SSH command:
	I0717 19:01:55.443658   88123 main.go:141] libmachine: (newest-cni-875270) DBG | exit 0
	I0717 19:01:55.564972   88123 main.go:141] libmachine: (newest-cni-875270) DBG | SSH cmd err, output: <nil>: 
	I0717 19:01:55.565373   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetConfigRaw
	I0717 19:01:55.566015   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetIP
	I0717 19:01:55.568304   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.568663   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:55.568687   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.568882   88123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/config.json ...
	I0717 19:01:55.569114   88123 machine.go:94] provisionDockerMachine start ...
	I0717 19:01:55.569134   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:55.569337   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:55.571720   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.572026   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:55.572051   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.572185   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:55.572348   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:55.572494   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:55.572632   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:55.572767   88123 main.go:141] libmachine: Using SSH client type: native
	I0717 19:01:55.572977   88123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0717 19:01:55.572989   88123 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 19:01:55.668847   88123 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 19:01:55.668877   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetMachineName
	I0717 19:01:55.669135   88123 buildroot.go:166] provisioning hostname "newest-cni-875270"
	I0717 19:01:55.669164   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetMachineName
	I0717 19:01:55.669348   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:55.672398   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.672788   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:55.672810   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.672955   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:55.673114   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:55.673263   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:55.673376   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:55.673528   88123 main.go:141] libmachine: Using SSH client type: native
	I0717 19:01:55.673688   88123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0717 19:01:55.673701   88123 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-875270 && echo "newest-cni-875270" | sudo tee /etc/hostname
	I0717 19:01:55.782005   88123 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-875270
	
	I0717 19:01:55.782035   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:55.784907   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.785267   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:55.785289   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.785530   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:55.785746   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:55.785911   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:55.786073   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:55.786257   88123 main.go:141] libmachine: Using SSH client type: native
	I0717 19:01:55.786485   88123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0717 19:01:55.786504   88123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-875270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-875270/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-875270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:01:55.892220   88123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:01:55.892250   88123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 19:01:55.892288   88123 buildroot.go:174] setting up certificates
	I0717 19:01:55.892307   88123 provision.go:84] configureAuth start
	I0717 19:01:55.892324   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetMachineName
	I0717 19:01:55.892617   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetIP
	I0717 19:01:55.895271   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.895609   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:55.895648   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.895762   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:55.897990   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.898298   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:55.898332   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:55.898449   88123 provision.go:143] copyHostCerts
	I0717 19:01:55.898522   88123 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 19:01:55.898539   88123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 19:01:55.898618   88123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 19:01:55.898749   88123 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 19:01:55.898761   88123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 19:01:55.898807   88123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 19:01:55.898894   88123 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 19:01:55.898904   88123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 19:01:55.898936   88123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 19:01:55.899016   88123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.newest-cni-875270 san=[127.0.0.1 192.168.39.225 localhost minikube newest-cni-875270]
	I0717 19:01:56.027622   88123 provision.go:177] copyRemoteCerts
	I0717 19:01:56.027682   88123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:01:56.027714   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:56.030180   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.030458   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:56.030484   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.030638   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:56.030792   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:56.030939   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:56.031126   88123 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:01:56.110902   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:01:56.136918   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:01:56.158420   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 19:01:56.179536   88123 provision.go:87] duration metric: took 287.216917ms to configureAuth
	I0717 19:01:56.179556   88123 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:01:56.179751   88123 config.go:182] Loaded profile config "newest-cni-875270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:01:56.179825   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:56.182310   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.182663   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:56.182691   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.182828   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:56.183015   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:56.183192   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:56.183312   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:56.183492   88123 main.go:141] libmachine: Using SSH client type: native
	I0717 19:01:56.183652   88123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0717 19:01:56.183672   88123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:01:56.425957   88123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:01:56.425983   88123 machine.go:97] duration metric: took 856.85681ms to provisionDockerMachine
	I0717 19:01:56.425996   88123 start.go:293] postStartSetup for "newest-cni-875270" (driver="kvm2")
	I0717 19:01:56.426008   88123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:01:56.426038   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:56.426359   88123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:01:56.426388   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:56.428886   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.429225   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:56.429246   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.429457   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:56.429763   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:56.429915   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:56.430029   88123 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:01:56.507052   88123 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:01:56.510951   88123 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:01:56.510971   88123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 19:01:56.511034   88123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 19:01:56.511131   88123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 19:01:56.511247   88123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:01:56.521461   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 19:01:56.543833   88123 start.go:296] duration metric: took 117.824382ms for postStartSetup
	I0717 19:01:56.543872   88123 fix.go:56] duration metric: took 17.298273997s for fixHost
	I0717 19:01:56.543897   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:56.546455   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.546747   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:56.546774   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.546942   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:56.547140   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:56.547309   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:56.547444   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:56.547589   88123 main.go:141] libmachine: Using SSH client type: native
	I0717 19:01:56.547805   88123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0717 19:01:56.547821   88123 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:01:56.645184   88123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721242916.606134490
	
	I0717 19:01:56.645209   88123 fix.go:216] guest clock: 1721242916.606134490
	I0717 19:01:56.645218   88123 fix.go:229] Guest: 2024-07-17 19:01:56.60613449 +0000 UTC Remote: 2024-07-17 19:01:56.5438769 +0000 UTC m=+17.428245685 (delta=62.25759ms)
	I0717 19:01:56.645242   88123 fix.go:200] guest clock delta is within tolerance: 62.25759ms
	I0717 19:01:56.645267   88123 start.go:83] releasing machines lock for "newest-cni-875270", held for 17.399661528s
	I0717 19:01:56.645296   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:56.645535   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetIP
	I0717 19:01:56.648022   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.648368   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:56.648394   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.648530   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:56.649086   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:56.649252   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:56.649323   88123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:01:56.649444   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:56.649447   88123 ssh_runner.go:195] Run: cat /version.json
	I0717 19:01:56.649524   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:56.652100   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.652359   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.652389   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:56.652410   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.652560   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:56.652718   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:56.652788   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:56.652812   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:56.652858   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:56.652959   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:56.653039   88123 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:01:56.653136   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:56.653245   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:56.653410   88123 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:01:56.759725   88123 ssh_runner.go:195] Run: systemctl --version
	I0717 19:01:56.765207   88123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:01:56.904724   88123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:01:56.909855   88123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:01:56.909926   88123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:01:56.924183   88123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:01:56.924201   88123 start.go:495] detecting cgroup driver to use...
	I0717 19:01:56.924274   88123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:01:56.938864   88123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:01:56.951391   88123 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:01:56.951432   88123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:01:56.963800   88123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:01:56.975919   88123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:01:57.078705   88123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:01:57.230500   88123 docker.go:233] disabling docker service ...
	I0717 19:01:57.230571   88123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:01:57.243664   88123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:01:57.255137   88123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:01:57.357855   88123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:01:57.459262   88123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:01:57.471941   88123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:01:57.488315   88123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 19:01:57.488374   88123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:57.497320   88123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:01:57.497380   88123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:57.506530   88123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:57.516041   88123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:57.525235   88123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:01:57.534917   88123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:57.544038   88123 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:57.559866   88123 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
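Pieced together from the sed invocations logged above (and not captured from the VM itself), the resulting /etc/crio/crio.conf.d/02-crio.conf drop-in should end up looking roughly like the sketch below; the [crio.image]/[crio.runtime] section headers are assumed from cri-o's stock config layout rather than shown in the log.

	[crio.image]
	# pause image pinned by the first sed above
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	# cgroup driver and conmon cgroup rewritten/re-added by the next two seds
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	# default_sysctls block ensured, then the unprivileged-port entry prepended
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]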
	I0717 19:01:57.569023   88123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:01:57.577266   88123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:01:57.577324   88123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:01:57.589359   88123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:01:57.597990   88123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:01:57.713004   88123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:01:57.840257   88123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:01:57.840346   88123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:01:57.844475   88123 start.go:563] Will wait 60s for crictl version
	I0717 19:01:57.844519   88123 ssh_runner.go:195] Run: which crictl
	I0717 19:01:57.847828   88123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:01:57.890914   88123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:01:57.891011   88123 ssh_runner.go:195] Run: crio --version
	I0717 19:01:57.917320   88123 ssh_runner.go:195] Run: crio --version
	I0717 19:01:57.944485   88123 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 19:01:57.945612   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetIP
	I0717 19:01:57.948132   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:57.948393   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:57.948442   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:57.948593   88123 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:01:57.952147   88123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:01:57.964663   88123 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 19:01:57.965913   88123 kubeadm.go:883] updating cluster {Name:newest-cni-875270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:01:57.966045   88123 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:01:57.966112   88123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:01:57.999718   88123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 19:01:57.999771   88123 ssh_runner.go:195] Run: which lz4
	I0717 19:01:58.003508   88123 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:01:58.007116   88123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:01:58.007145   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0717 19:01:59.198311   88123 crio.go:462] duration metric: took 1.194833119s to copy over tarball
	I0717 19:01:59.198383   88123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:02:01.146156   88123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.94774368s)
	I0717 19:02:01.146186   88123 crio.go:469] duration metric: took 1.947847166s to extract the tarball
	I0717 19:02:01.146195   88123 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:02:01.181658   88123 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:02:01.223024   88123 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:02:01.223064   88123 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:02:01.223073   88123 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.31.0-beta.0 crio true true} ...
	I0717 19:02:01.223218   88123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-875270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:02:01.223306   88123 ssh_runner.go:195] Run: crio config
	I0717 19:02:01.275072   88123 cni.go:84] Creating CNI manager for ""
	I0717 19:02:01.275099   88123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:02:01.275114   88123 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0717 19:02:01.275144   88123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-875270 NodeName:newest-cni-875270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:02:01.275321   88123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-875270"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
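	
	Note on the evictionHard values above: they appear as "0%!"(MISSING) rather than "0%" almost certainly because the rendered kubeadm YAML was passed as the format string of a printf-style logging call. In Go's fmt package, a '%' followed by an unknown verb with no remaining operand is printed as %!<verb>(MISSING). A minimal, self-contained sketch reproducing the effect (variable names are illustrative, not minikube code):
	
	package main
	
	import "fmt"
	
	func main() {
		// The rendered kubeadm YAML contains the literal value "0%".
		line := `  nodefs.available: "0%"`
	
		// Used as the format string of a printf-style call with no operands,
		// the '%' followed by '"' is an unknown verb with a missing argument,
		// which fmt renders as %!"(MISSING):
		fmt.Println(fmt.Sprintf(line)) // prints:   nodefs.available: "0%!"(MISSING)
	
		// Passing the text as an operand keeps it intact:
		fmt.Printf("%s\n", line) // prints:   nodefs.available: "0%"
	}
	
	The thresholds intended here are plain "0%" for nodefs.available, nodefs.inodesFree and imagefs.available (consistent with the "disable disk resource management by default" comment); the (MISSING) suffix is logging noise, not part of the file written to the node.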
	
	I0717 19:02:01.275435   88123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 19:02:01.284378   88123 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:02:01.284443   88123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:02:01.292937   88123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0717 19:02:01.308352   88123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 19:02:01.323511   88123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0717 19:02:01.339284   88123 ssh_runner.go:195] Run: grep 192.168.39.225	control-plane.minikube.internal$ /etc/hosts
	I0717 19:02:01.342880   88123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
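	
	The one-line shell pipeline above strips any existing control-plane.minikube.internal entry from /etc/hosts, appends a fresh mapping to the current node IP, and copies the result back with sudo. A rough in-process equivalent in Go, assuming direct file access rather than minikube's SSH-plus-shell approach (sketch only):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostEntry rewrites an /etc/hosts-style file so exactly one line maps
	// host to ip, mirroring the grep -v / echo / sudo cp pipeline in the log.
	func ensureHostEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing mapping for the host, whatever IP it points to.
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}
	
	func main() {
		if err := ensureHostEntry("/etc/hosts", "192.168.39.225", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}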
	I0717 19:02:01.353792   88123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:02:01.463458   88123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:02:01.478164   88123 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270 for IP: 192.168.39.225
	I0717 19:02:01.478221   88123 certs.go:194] generating shared ca certs ...
	I0717 19:02:01.478240   88123 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:02:01.478394   88123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 19:02:01.478458   88123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 19:02:01.478474   88123 certs.go:256] generating profile certs ...
	I0717 19:02:01.478589   88123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/client.key
	I0717 19:02:01.478669   88123 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.key.b86eadd9
	I0717 19:02:01.478723   88123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/proxy-client.key
	I0717 19:02:01.478847   88123 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 19:02:01.478885   88123 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 19:02:01.478899   88123 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 19:02:01.478935   88123 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:02:01.478970   88123 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:02:01.479001   88123 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 19:02:01.479062   88123 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 19:02:01.479637   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:02:01.503578   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:02:01.527364   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:02:01.557968   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:02:01.584257   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 19:02:01.609966   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:02:01.633972   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:02:01.654763   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:02:01.675195   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 19:02:01.695373   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:02:01.715679   88123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 19:02:01.736407   88123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:02:01.751232   88123 ssh_runner.go:195] Run: openssl version
	I0717 19:02:01.756444   88123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 19:02:01.765791   88123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 19:02:01.770100   88123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 19:02:01.770148   88123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 19:02:01.775517   88123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:02:01.785136   88123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:02:01.794740   88123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:02:01.798542   88123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:02:01.798585   88123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:02:01.803709   88123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:02:01.813053   88123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 19:02:01.822476   88123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 19:02:01.826557   88123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 19:02:01.826609   88123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 19:02:01.831947   88123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 19:02:01.841470   88123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:02:01.845375   88123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:02:01.850646   88123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:02:01.855823   88123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:02:01.860992   88123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:02:01.866044   88123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:02:01.871076   88123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
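	
	The openssl calls above check, for each control-plane certificate, that it will still be valid 24 hours from now (-checkend 86400) before the restart proceeds. A hedged Go sketch of the same check done with crypto/x509 instead of shelling out to openssl; the path is taken from the log and the helper name is hypothetical:
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM certificate at path expires in less
	// than d, i.e. the condition `openssl x509 -checkend <seconds>` flags.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}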
	I0717 19:02:01.876109   88123 kubeadm.go:392] StartCluster: {Name:newest-cni-875270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartH
ostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:02:01.876226   88123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:02:01.876274   88123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:02:01.911994   88123 cri.go:89] found id: ""
	I0717 19:02:01.912074   88123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:02:01.921085   88123 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 19:02:01.921100   88123 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 19:02:01.921134   88123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:02:01.929608   88123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:02:01.930132   88123 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-875270" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 19:02:01.930390   88123 kubeconfig.go:62] /home/jenkins/minikube-integration/19283-14386/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-875270" cluster setting kubeconfig missing "newest-cni-875270" context setting]
	I0717 19:02:01.930880   88123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:02:01.932021   88123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:02:01.940197   88123 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.225
	I0717 19:02:01.940227   88123 kubeadm.go:1160] stopping kube-system containers ...
	I0717 19:02:01.940239   88123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:02:01.940277   88123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:02:01.972105   88123 cri.go:89] found id: ""
	I0717 19:02:01.972166   88123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:02:01.987037   88123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:02:01.996039   88123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:02:01.996056   88123 kubeadm.go:157] found existing configuration files:
	
	I0717 19:02:01.996089   88123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:02:02.004070   88123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:02:02.004127   88123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:02:02.013269   88123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:02:02.021444   88123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:02:02.021533   88123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:02:02.029753   88123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:02:02.037602   88123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:02:02.037658   88123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:02:02.045728   88123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:02:02.053542   88123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:02:02.053621   88123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:02:02.061567   88123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:02:02.069881   88123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:02:02.183068   88123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:02:03.287539   88123 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.104421826s)
	I0717 19:02:03.287567   88123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:02:03.491958   88123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:02:03.561822   88123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:02:03.677028   88123 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:02:03.677102   88123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:02:04.177691   88123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:02:04.677810   88123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:02:05.177162   88123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:02:05.677156   88123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:02:05.721143   88123 api_server.go:72] duration metric: took 2.044112366s to wait for apiserver process to appear ...
	I0717 19:02:05.721171   88123 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:02:05.721193   88123 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0717 19:02:05.721671   88123 api_server.go:269] stopped: https://192.168.39.225:8443/healthz: Get "https://192.168.39.225:8443/healthz": dial tcp 192.168.39.225:8443: connect: connection refused
	I0717 19:02:06.222122   88123 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0717 19:02:08.151243   88123 api_server.go:279] https://192.168.39.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:02:08.151275   88123 api_server.go:103] status: https://192.168.39.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:02:08.151290   88123 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0717 19:02:08.190513   88123 api_server.go:279] https://192.168.39.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:02:08.190539   88123 api_server.go:103] status: https://192.168.39.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:02:08.221759   88123 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0717 19:02:08.256542   88123 api_server.go:279] https://192.168.39.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:02:08.256570   88123 api_server.go:103] status: https://192.168.39.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:02:08.722128   88123 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0717 19:02:08.729936   88123 api_server.go:279] https://192.168.39.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:02:08.729964   88123 api_server.go:103] status: https://192.168.39.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:02:09.222198   88123 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0717 19:02:09.231554   88123 api_server.go:279] https://192.168.39.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 19:02:09.231596   88123 api_server.go:103] status: https://192.168.39.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 19:02:09.722112   88123 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0717 19:02:09.727301   88123 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0717 19:02:09.742342   88123 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:02:09.742380   88123 api_server.go:131] duration metric: took 4.021200653s to wait for apiserver health ...
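	
	The restart path above polls https://<node-ip>:8443/healthz until it returns 200: the early 403 responses come back while the anonymous-access RBAC bootstrap roles are not yet in place, the 500 responses while poststarthooks are still completing. A minimal Go sketch of such a polling loop, assuming a self-signed apiserver serving cert (hence InsecureSkipVerify); this is not minikube's actual client setup:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 OK or the deadline passes. 403/500 responses are treated as
	// "not ready yet" and retried.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serving cert is not in the system trust store here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.39.225:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}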
	I0717 19:02:09.742392   88123 cni.go:84] Creating CNI manager for ""
	I0717 19:02:09.742401   88123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:02:09.744515   88123 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 19:02:09.745879   88123 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 19:02:09.763588   88123 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
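	
	The bridge CNI step writes a conflist to /etc/cni/net.d/1-k8s.conflist (496 bytes per the log), but the file's contents are not shown. As a rough, hypothetical illustration only, a bridge-plus-portmap conflist for the 10.42.0.0/16 pod CIDR might look like the payload below, written the same way the scp step does; the exact JSON minikube generates may differ:
	
	package main
	
	import (
		"log"
		"os"
	)
	
	// Illustrative bridge CNI conflist for the 10.42.0.0/16 pod CIDR.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.42.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}`
	
	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}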
	I0717 19:02:09.779733   88123 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:02:09.792332   88123 system_pods.go:59] 8 kube-system pods found
	I0717 19:02:09.792360   88123 system_pods.go:61] "coredns-5cfdc65f69-tpkws" [d2bfb6bb-ac21-4777-821f-a97d45f5fb31] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:02:09.792371   88123 system_pods.go:61] "etcd-newest-cni-875270" [fa1e9c40-fa0c-4170-b577-549e67920a17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:02:09.792379   88123 system_pods.go:61] "kube-apiserver-newest-cni-875270" [6257489f-bd71-4cd3-abdb-211ec8123262] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:02:09.792385   88123 system_pods.go:61] "kube-controller-manager-newest-cni-875270" [a5395233-2591-4485-87f7-4864ba776d9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:02:09.792392   88123 system_pods.go:61] "kube-proxy-ntnl9" [953be0e8-0e4a-4f77-8ee7-1710f944535f] Running
	I0717 19:02:09.792399   88123 system_pods.go:61] "kube-scheduler-newest-cni-875270" [f5c393c9-7313-42f0-a0f1-c579ae71ed4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:02:09.792406   88123 system_pods.go:61] "metrics-server-78fcd8795b-bwx7w" [5d80136e-3a9b-4b63-b223-3d07e9cb1e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:02:09.792412   88123 system_pods.go:61] "storage-provisioner" [8e2e13d3-4309-4eb9-b5cd-d0401b90d7a8] Running
	I0717 19:02:09.792420   88123 system_pods.go:74] duration metric: took 12.667392ms to wait for pod list to return data ...
	I0717 19:02:09.792430   88123 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:02:09.796067   88123 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:02:09.796089   88123 node_conditions.go:123] node cpu capacity is 2
	I0717 19:02:09.796099   88123 node_conditions.go:105] duration metric: took 3.664419ms to run NodePressure ...
	I0717 19:02:09.796115   88123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:02:10.058610   88123 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:02:10.069041   88123 ops.go:34] apiserver oom_adj: -16
	I0717 19:02:10.069065   88123 kubeadm.go:597] duration metric: took 8.14795856s to restartPrimaryControlPlane
	I0717 19:02:10.069077   88123 kubeadm.go:394] duration metric: took 8.192972805s to StartCluster
	I0717 19:02:10.069097   88123 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:02:10.069176   88123 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 19:02:10.070022   88123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:02:10.070231   88123 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:02:10.070286   88123 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 19:02:10.070375   88123 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-875270"
	I0717 19:02:10.070396   88123 addons.go:69] Setting dashboard=true in profile "newest-cni-875270"
	I0717 19:02:10.070408   88123 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-875270"
	I0717 19:02:10.070401   88123 addons.go:69] Setting default-storageclass=true in profile "newest-cni-875270"
	W0717 19:02:10.070416   88123 addons.go:243] addon storage-provisioner should already be in state true
	I0717 19:02:10.070422   88123 addons.go:234] Setting addon dashboard=true in "newest-cni-875270"
	W0717 19:02:10.070441   88123 addons.go:243] addon dashboard should already be in state true
	I0717 19:02:10.070448   88123 host.go:66] Checking if "newest-cni-875270" exists ...
	I0717 19:02:10.070458   88123 config.go:182] Loaded profile config "newest-cni-875270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:02:10.070449   88123 addons.go:69] Setting metrics-server=true in profile "newest-cni-875270"
	I0717 19:02:10.070493   88123 addons.go:234] Setting addon metrics-server=true in "newest-cni-875270"
	W0717 19:02:10.070504   88123 addons.go:243] addon metrics-server should already be in state true
	I0717 19:02:10.070547   88123 host.go:66] Checking if "newest-cni-875270" exists ...
	I0717 19:02:10.070472   88123 host.go:66] Checking if "newest-cni-875270" exists ...
	I0717 19:02:10.070440   88123 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-875270"
	I0717 19:02:10.070854   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:02:10.070895   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:02:10.070928   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:02:10.070928   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:02:10.070960   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:02:10.070993   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:02:10.071018   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:02:10.071052   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:02:10.072040   88123 out.go:177] * Verifying Kubernetes components...
	I0717 19:02:10.073356   88123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:02:10.086495   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38153
	I0717 19:02:10.086505   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0717 19:02:10.087021   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:02:10.087130   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:02:10.087584   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:02:10.087609   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:02:10.087591   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:02:10.087668   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:02:10.087933   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:02:10.087987   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:02:10.088200   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetState
	I0717 19:02:10.088535   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:02:10.088582   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:02:10.088624   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45549
	I0717 19:02:10.088853   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41235
	I0717 19:02:10.089051   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:02:10.089354   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:02:10.089592   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:02:10.089614   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:02:10.089793   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:02:10.089817   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:02:10.089927   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:02:10.090153   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:02:10.090428   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:02:10.090468   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:02:10.090698   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:02:10.090732   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:02:10.091524   88123 addons.go:234] Setting addon default-storageclass=true in "newest-cni-875270"
	W0717 19:02:10.091536   88123 addons.go:243] addon default-storageclass should already be in state true
	I0717 19:02:10.091563   88123 host.go:66] Checking if "newest-cni-875270" exists ...
	I0717 19:02:10.091821   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:02:10.091846   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:02:10.106549   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0717 19:02:10.106981   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:02:10.107422   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34259
	I0717 19:02:10.107506   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:02:10.107518   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:02:10.107868   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:02:10.108077   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetState
	I0717 19:02:10.108095   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40215
	I0717 19:02:10.108500   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:02:10.108506   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:02:10.109032   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:02:10.109053   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:02:10.109060   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
	I0717 19:02:10.109037   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:02:10.109106   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:02:10.109350   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:02:10.109614   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:02:10.109803   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:02:10.109813   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetState
	I0717 19:02:10.109820   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:02:10.109845   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:02:10.110051   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:02:10.110125   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:02:10.110249   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetState
	I0717 19:02:10.110863   88123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:02:10.110881   88123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:02:10.111348   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:02:10.118011   88123 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0717 19:02:10.118015   88123 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:02:10.118348   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:02:10.119839   88123 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 19:02:10.119946   88123 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:02:10.119961   88123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:02:10.119980   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:02:10.121113   88123 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 19:02:10.121135   88123 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 19:02:10.121155   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:02:10.121220   88123 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0717 19:02:10.122462   88123 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0717 19:02:10.122484   88123 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0717 19:02:10.122502   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:02:10.124668   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:02:10.124827   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:02:10.125252   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:02:10.125272   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:02:10.125439   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:02:10.125456   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:02:10.125806   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:02:10.125866   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:02:10.125946   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:02:10.125999   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:02:10.126082   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:02:10.126132   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:02:10.126208   88123 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:02:10.126350   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:02:10.126395   88123 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:02:10.126656   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:02:10.126687   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:02:10.126777   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:02:10.126974   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:02:10.127089   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:02:10.127237   88123 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:02:10.129823   88123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0717 19:02:10.130184   88123 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:02:10.130683   88123 main.go:141] libmachine: Using API Version  1
	I0717 19:02:10.130700   88123 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:02:10.130998   88123 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:02:10.131154   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetState
	I0717 19:02:10.132579   88123 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:02:10.132780   88123 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:02:10.132793   88123 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:02:10.132807   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:02:10.135682   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:02:10.136220   88123 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:01:49 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:02:10.136326   88123 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:02:10.136547   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:02:10.136713   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:02:10.136873   88123 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:02:10.137013   88123 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:02:10.289466   88123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:02:10.306940   88123 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:02:10.307016   88123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:02:10.320134   88123 api_server.go:72] duration metric: took 249.866478ms to wait for apiserver process to appear ...
	I0717 19:02:10.320164   88123 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:02:10.320191   88123 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0717 19:02:10.328887   88123 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0717 19:02:10.329783   88123 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 19:02:10.329807   88123 api_server.go:131] duration metric: took 9.636271ms to wait for apiserver health ...
	I0717 19:02:10.329817   88123 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:02:10.335980   88123 system_pods.go:59] 8 kube-system pods found
	I0717 19:02:10.336015   88123 system_pods.go:61] "coredns-5cfdc65f69-tpkws" [d2bfb6bb-ac21-4777-821f-a97d45f5fb31] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 19:02:10.336025   88123 system_pods.go:61] "etcd-newest-cni-875270" [fa1e9c40-fa0c-4170-b577-549e67920a17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:02:10.336040   88123 system_pods.go:61] "kube-apiserver-newest-cni-875270" [6257489f-bd71-4cd3-abdb-211ec8123262] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:02:10.336049   88123 system_pods.go:61] "kube-controller-manager-newest-cni-875270" [a5395233-2591-4485-87f7-4864ba776d9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:02:10.336058   88123 system_pods.go:61] "kube-proxy-ntnl9" [953be0e8-0e4a-4f77-8ee7-1710f944535f] Running
	I0717 19:02:10.336074   88123 system_pods.go:61] "kube-scheduler-newest-cni-875270" [f5c393c9-7313-42f0-a0f1-c579ae71ed4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:02:10.336085   88123 system_pods.go:61] "metrics-server-78fcd8795b-bwx7w" [5d80136e-3a9b-4b63-b223-3d07e9cb1e38] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 19:02:10.336094   88123 system_pods.go:61] "storage-provisioner" [8e2e13d3-4309-4eb9-b5cd-d0401b90d7a8] Running
	I0717 19:02:10.336106   88123 system_pods.go:74] duration metric: took 6.283106ms to wait for pod list to return data ...
	I0717 19:02:10.336117   88123 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:02:10.339299   88123 default_sa.go:45] found service account: "default"
	I0717 19:02:10.339326   88123 default_sa.go:55] duration metric: took 3.19571ms for default service account to be created ...
	I0717 19:02:10.339345   88123 kubeadm.go:582] duration metric: took 269.082094ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 19:02:10.339368   88123 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:02:10.342358   88123 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 19:02:10.342384   88123 node_conditions.go:123] node cpu capacity is 2
	I0717 19:02:10.342397   88123 node_conditions.go:105] duration metric: took 3.024206ms to run NodePressure ...
	I0717 19:02:10.342415   88123 start.go:241] waiting for startup goroutines ...
	I0717 19:02:10.429339   88123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:02:10.446488   88123 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0717 19:02:10.446512   88123 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0717 19:02:10.463083   88123 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 19:02:10.463106   88123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 19:02:10.464615   88123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:02:10.487514   88123 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 19:02:10.487547   88123 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 19:02:10.492856   88123 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0717 19:02:10.492878   88123 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0717 19:02:10.530853   88123 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:02:10.530882   88123 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 19:02:10.543331   88123 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0717 19:02:10.543361   88123 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0717 19:02:10.570745   88123 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0717 19:02:10.570775   88123 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0717 19:02:10.580833   88123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 19:02:10.611290   88123 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0717 19:02:10.611321   88123 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0717 19:02:10.677846   88123 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0717 19:02:10.677870   88123 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0717 19:02:10.748068   88123 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0717 19:02:10.748088   88123 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0717 19:02:10.806929   88123 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0717 19:02:10.806955   88123 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0717 19:02:10.874521   88123 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 19:02:10.874554   88123 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0717 19:02:10.912389   88123 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 19:02:11.839898   88123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.410521431s)
	I0717 19:02:11.839952   88123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.375310615s)
	I0717 19:02:11.839957   88123 main.go:141] libmachine: Making call to close driver server
	I0717 19:02:11.839969   88123 main.go:141] libmachine: (newest-cni-875270) Calling .Close
	I0717 19:02:11.839973   88123 main.go:141] libmachine: Making call to close driver server
	I0717 19:02:11.839982   88123 main.go:141] libmachine: (newest-cni-875270) Calling .Close
	I0717 19:02:11.840097   88123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.259227081s)
	I0717 19:02:11.840137   88123 main.go:141] libmachine: Making call to close driver server
	I0717 19:02:11.840153   88123 main.go:141] libmachine: (newest-cni-875270) Calling .Close
	I0717 19:02:11.840395   88123 main.go:141] libmachine: (newest-cni-875270) DBG | Closing plugin on server side
	I0717 19:02:11.840438   88123 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:02:11.840448   88123 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:02:11.840453   88123 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:02:11.840457   88123 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:02:11.840462   88123 main.go:141] libmachine: Making call to close driver server
	I0717 19:02:11.840477   88123 main.go:141] libmachine: (newest-cni-875270) Calling .Close
	I0717 19:02:11.840495   88123 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:02:11.840509   88123 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:02:11.840469   88123 main.go:141] libmachine: Making call to close driver server
	I0717 19:02:11.840529   88123 main.go:141] libmachine: (newest-cni-875270) Calling .Close
	I0717 19:02:11.840545   88123 main.go:141] libmachine: (newest-cni-875270) DBG | Closing plugin on server side
	I0717 19:02:11.840517   88123 main.go:141] libmachine: Making call to close driver server
	I0717 19:02:11.840579   88123 main.go:141] libmachine: (newest-cni-875270) Calling .Close
	I0717 19:02:11.840789   88123 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:02:11.840803   88123 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:02:11.840809   88123 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:02:11.840823   88123 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:02:11.840827   88123 main.go:141] libmachine: (newest-cni-875270) DBG | Closing plugin on server side
	I0717 19:02:11.840813   88123 addons.go:475] Verifying addon metrics-server=true in "newest-cni-875270"
	I0717 19:02:11.841125   88123 main.go:141] libmachine: (newest-cni-875270) DBG | Closing plugin on server side
	I0717 19:02:11.841164   88123 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:02:11.841177   88123 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:02:11.849450   88123 main.go:141] libmachine: Making call to close driver server
	I0717 19:02:11.849472   88123 main.go:141] libmachine: (newest-cni-875270) Calling .Close
	I0717 19:02:11.849705   88123 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:02:11.849721   88123 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:02:11.849722   88123 main.go:141] libmachine: (newest-cni-875270) DBG | Closing plugin on server side
	I0717 19:02:11.979886   88123 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.067443058s)
	I0717 19:02:11.979942   88123 main.go:141] libmachine: Making call to close driver server
	I0717 19:02:11.979960   88123 main.go:141] libmachine: (newest-cni-875270) Calling .Close
	I0717 19:02:11.980236   88123 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:02:11.980254   88123 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:02:11.980274   88123 main.go:141] libmachine: (newest-cni-875270) DBG | Closing plugin on server side
	I0717 19:02:11.980349   88123 main.go:141] libmachine: Making call to close driver server
	I0717 19:02:11.980364   88123 main.go:141] libmachine: (newest-cni-875270) Calling .Close
	I0717 19:02:11.980585   88123 main.go:141] libmachine: Successfully made call to close driver server
	I0717 19:02:11.980603   88123 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 19:02:11.980628   88123 main.go:141] libmachine: (newest-cni-875270) DBG | Closing plugin on server side
	I0717 19:02:11.982197   88123 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-875270 addons enable metrics-server
	
	I0717 19:02:11.983662   88123 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0717 19:02:11.984996   88123 addons.go:510] duration metric: took 1.914711992s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0717 19:02:11.985036   88123 start.go:246] waiting for cluster config update ...
	I0717 19:02:11.985051   88123 start.go:255] writing updated cluster config ...
	I0717 19:02:11.985338   88123 ssh_runner.go:195] Run: rm -f paused
	I0717 19:02:12.030894   88123 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 19:02:12.032825   88123 out.go:177] * Done! kubectl is now configured to use "newest-cni-875270" cluster and "default" namespace by default
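
The apiserver readiness and addon steps logged above can be reproduced manually against the same profile; a minimal sketch, assuming the newest-cni-875270 cluster is still running and 192.168.39.225:8443 is still its apiserver endpoint (illustrative commands only, not part of the test harness):

	# Probe the same healthz endpoint the log polls (-k because the apiserver certificate is self-signed)
	curl -k https://192.168.39.225:8443/healthz

	# List the kube-system pods the log enumerates before declaring the control plane ready
	kubectl --context newest-cni-875270 -n kube-system get pods

	# Re-run the addon enable step suggested in the log output
	minikube -p newest-cni-875270 addons enable metrics-server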
	
	
	==> CRI-O <==
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.378758095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242976378690020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e241731c-c890-4032-b652-8dd78726bbd6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.379465643Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f125348e-43ee-4a9f-84ec-b8d191a280e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.379529904Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f125348e-43ee-4a9f-84ec-b8d191a280e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.379752246Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:218a44cd8585fbb83856c49696567afd594b4da967ac5ce50a0f632e2a6138cf,PodSandboxId:fb12b6b348e3e8568d69a1524584087652bbf96f2a5c845f8fda2ab30e641139,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241957906796398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9b11611-2008-4a15-a661-62809bd1d4c3,},Annotations:map[string]string{io.kubernetes.container.hash: a189e809,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5119186a70a760fe0c9b05022c775aaabe1a15791e247d7e841827098d306094,PodSandboxId:7d313062ed4075c4bf53961edb6b650038f88792ea8bcc9f3937e4a98ba438b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957428999810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn64r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cbef26-555a-4693-afac-c739d9238a04,},Annotations:map[string]string{io.kubernetes.container.hash: 415218df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53dbf27ee711bde074b1abeee9bda1c0d830a983bf1acba2b6c8dfce83506a1,PodSandboxId:3407801315db4c603819b6fbd1e8c488045e35c148565ec53bbe65a53f31e252,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957235378366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fp4tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: dc66092c-9183-4630-93cc-6ec4aa59a928,},Annotations:map[string]string{io.kubernetes.container.hash: 5b65e69b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d0f9b94a63b54376b5b3829bce9836e163d81fa80f24bf00e7f22b57d1a7a,PodSandboxId:455fc8fef39b80fd07b2de059ed7d5455df22677ec7846946cb948b87cbf9023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721241956624394448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hnb5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 80fc2e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebbb2d90c0739141688761958d1119db0b157d52ffc853e1617aae7b4bf391,PodSandboxId:aabc1991466408493cace4e1341882e1ba856c5c65e55c8fb572ee9a32e8e302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172124193613116032
5,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b95a014b1974e2af4c29b922c88ba23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446730942de93d8fa246bfeb34d266f7bf40a70f2053eb3e9ac31212deff821,PodSandboxId:84fe57441a688f0d08a97f67b75df506036728b8fc5ada6ca6c0e0dbeec677ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:172124193614
5161953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f45fb335c5e2df14c04532f6497e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef9a4c788e9faf3a71500cb6e6711f5724fd07dbb7913c27ce756e69d8f30428,PodSandboxId:037fefa47cc3e2e9904b65a373b2dd771ffd70af156e34a05516c8f22a809237,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172124
1936105803776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b881b6fb22297dfca21c86875467d3,},Annotations:map[string]string{io.kubernetes.container.hash: 83137d99,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b9c11d9cadb0acdcc1067e825e408e9b1254ab6fea64f318e165d96850aa,PodSandboxId:66ba99c8af289e788e9aa97aa463bbeec09c98cbc44cc6fd685aff9ece2cc687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241936071541681,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9381e247719c18d6691e17ec6054a636be76ac6e3cda059f343170a5021edac6,PodSandboxId:e7f0782e6d6c684dbec94e6a3219bf7a955c607c4980918f26af71b26860402a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241647657505331,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f125348e-43ee-4a9f-84ec-b8d191a280e0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.413241316Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9d19eee-3d39-4a3c-9014-b09434d3f722 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.413314460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9d19eee-3d39-4a3c-9014-b09434d3f722 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.414368445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17b6cdba-7d44-4a60-a148-adee5e6635af name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.414928422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242976414879458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17b6cdba-7d44-4a60-a148-adee5e6635af name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.415572424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3390b0a1-c877-4613-8ecf-cca2bed35839 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.415637793Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3390b0a1-c877-4613-8ecf-cca2bed35839 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.415867722Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:218a44cd8585fbb83856c49696567afd594b4da967ac5ce50a0f632e2a6138cf,PodSandboxId:fb12b6b348e3e8568d69a1524584087652bbf96f2a5c845f8fda2ab30e641139,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241957906796398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9b11611-2008-4a15-a661-62809bd1d4c3,},Annotations:map[string]string{io.kubernetes.container.hash: a189e809,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5119186a70a760fe0c9b05022c775aaabe1a15791e247d7e841827098d306094,PodSandboxId:7d313062ed4075c4bf53961edb6b650038f88792ea8bcc9f3937e4a98ba438b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957428999810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn64r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cbef26-555a-4693-afac-c739d9238a04,},Annotations:map[string]string{io.kubernetes.container.hash: 415218df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53dbf27ee711bde074b1abeee9bda1c0d830a983bf1acba2b6c8dfce83506a1,PodSandboxId:3407801315db4c603819b6fbd1e8c488045e35c148565ec53bbe65a53f31e252,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957235378366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fp4tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: dc66092c-9183-4630-93cc-6ec4aa59a928,},Annotations:map[string]string{io.kubernetes.container.hash: 5b65e69b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d0f9b94a63b54376b5b3829bce9836e163d81fa80f24bf00e7f22b57d1a7a,PodSandboxId:455fc8fef39b80fd07b2de059ed7d5455df22677ec7846946cb948b87cbf9023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721241956624394448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hnb5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 80fc2e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebbb2d90c0739141688761958d1119db0b157d52ffc853e1617aae7b4bf391,PodSandboxId:aabc1991466408493cace4e1341882e1ba856c5c65e55c8fb572ee9a32e8e302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172124193613116032
5,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b95a014b1974e2af4c29b922c88ba23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446730942de93d8fa246bfeb34d266f7bf40a70f2053eb3e9ac31212deff821,PodSandboxId:84fe57441a688f0d08a97f67b75df506036728b8fc5ada6ca6c0e0dbeec677ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:172124193614
5161953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f45fb335c5e2df14c04532f6497e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef9a4c788e9faf3a71500cb6e6711f5724fd07dbb7913c27ce756e69d8f30428,PodSandboxId:037fefa47cc3e2e9904b65a373b2dd771ffd70af156e34a05516c8f22a809237,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172124
1936105803776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b881b6fb22297dfca21c86875467d3,},Annotations:map[string]string{io.kubernetes.container.hash: 83137d99,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b9c11d9cadb0acdcc1067e825e408e9b1254ab6fea64f318e165d96850aa,PodSandboxId:66ba99c8af289e788e9aa97aa463bbeec09c98cbc44cc6fd685aff9ece2cc687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241936071541681,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9381e247719c18d6691e17ec6054a636be76ac6e3cda059f343170a5021edac6,PodSandboxId:e7f0782e6d6c684dbec94e6a3219bf7a955c607c4980918f26af71b26860402a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241647657505331,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3390b0a1-c877-4613-8ecf-cca2bed35839 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.449440593Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4efb466d-eff0-4d16-a09b-e3de6bcbc76c name=/runtime.v1.RuntimeService/Version
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.449522299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4efb466d-eff0-4d16-a09b-e3de6bcbc76c name=/runtime.v1.RuntimeService/Version
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.450545710Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=41a58082-9c57-42dd-9d35-c482d768c463 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.451397378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242976451334135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=41a58082-9c57-42dd-9d35-c482d768c463 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.452160145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79fa17d8-8f47-422b-a016-1759f32a9735 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.452222934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79fa17d8-8f47-422b-a016-1759f32a9735 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.452478858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:218a44cd8585fbb83856c49696567afd594b4da967ac5ce50a0f632e2a6138cf,PodSandboxId:fb12b6b348e3e8568d69a1524584087652bbf96f2a5c845f8fda2ab30e641139,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241957906796398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9b11611-2008-4a15-a661-62809bd1d4c3,},Annotations:map[string]string{io.kubernetes.container.hash: a189e809,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5119186a70a760fe0c9b05022c775aaabe1a15791e247d7e841827098d306094,PodSandboxId:7d313062ed4075c4bf53961edb6b650038f88792ea8bcc9f3937e4a98ba438b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957428999810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn64r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cbef26-555a-4693-afac-c739d9238a04,},Annotations:map[string]string{io.kubernetes.container.hash: 415218df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53dbf27ee711bde074b1abeee9bda1c0d830a983bf1acba2b6c8dfce83506a1,PodSandboxId:3407801315db4c603819b6fbd1e8c488045e35c148565ec53bbe65a53f31e252,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957235378366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fp4tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: dc66092c-9183-4630-93cc-6ec4aa59a928,},Annotations:map[string]string{io.kubernetes.container.hash: 5b65e69b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d0f9b94a63b54376b5b3829bce9836e163d81fa80f24bf00e7f22b57d1a7a,PodSandboxId:455fc8fef39b80fd07b2de059ed7d5455df22677ec7846946cb948b87cbf9023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721241956624394448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hnb5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 80fc2e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebbb2d90c0739141688761958d1119db0b157d52ffc853e1617aae7b4bf391,PodSandboxId:aabc1991466408493cace4e1341882e1ba856c5c65e55c8fb572ee9a32e8e302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172124193613116032
5,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b95a014b1974e2af4c29b922c88ba23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446730942de93d8fa246bfeb34d266f7bf40a70f2053eb3e9ac31212deff821,PodSandboxId:84fe57441a688f0d08a97f67b75df506036728b8fc5ada6ca6c0e0dbeec677ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:172124193614
5161953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f45fb335c5e2df14c04532f6497e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef9a4c788e9faf3a71500cb6e6711f5724fd07dbb7913c27ce756e69d8f30428,PodSandboxId:037fefa47cc3e2e9904b65a373b2dd771ffd70af156e34a05516c8f22a809237,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172124
1936105803776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b881b6fb22297dfca21c86875467d3,},Annotations:map[string]string{io.kubernetes.container.hash: 83137d99,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b9c11d9cadb0acdcc1067e825e408e9b1254ab6fea64f318e165d96850aa,PodSandboxId:66ba99c8af289e788e9aa97aa463bbeec09c98cbc44cc6fd685aff9ece2cc687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241936071541681,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9381e247719c18d6691e17ec6054a636be76ac6e3cda059f343170a5021edac6,PodSandboxId:e7f0782e6d6c684dbec94e6a3219bf7a955c607c4980918f26af71b26860402a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241647657505331,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79fa17d8-8f47-422b-a016-1759f32a9735 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.483699110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d8809b3-41ad-4ea8-8e2b-59f6190c5ced name=/runtime.v1.RuntimeService/Version
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.483817238Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d8809b3-41ad-4ea8-8e2b-59f6190c5ced name=/runtime.v1.RuntimeService/Version
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.485083636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5577a55-0ae8-4d6e-a054-e58707b0eb20 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.485474856Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242976485455542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5577a55-0ae8-4d6e-a054-e58707b0eb20 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.486244568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62e4453d-d91d-48bf-bbf1-775b8aeb8dc7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.486304917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62e4453d-d91d-48bf-bbf1-775b8aeb8dc7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:02:56 default-k8s-diff-port-022930 crio[719]: time="2024-07-17 19:02:56.486498383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:218a44cd8585fbb83856c49696567afd594b4da967ac5ce50a0f632e2a6138cf,PodSandboxId:fb12b6b348e3e8568d69a1524584087652bbf96f2a5c845f8fda2ab30e641139,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241957906796398,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9b11611-2008-4a15-a661-62809bd1d4c3,},Annotations:map[string]string{io.kubernetes.container.hash: a189e809,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5119186a70a760fe0c9b05022c775aaabe1a15791e247d7e841827098d306094,PodSandboxId:7d313062ed4075c4bf53961edb6b650038f88792ea8bcc9f3937e4a98ba438b8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957428999810,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jn64r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cbef26-555a-4693-afac-c739d9238a04,},Annotations:map[string]string{io.kubernetes.container.hash: 415218df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d53dbf27ee711bde074b1abeee9bda1c0d830a983bf1acba2b6c8dfce83506a1,PodSandboxId:3407801315db4c603819b6fbd1e8c488045e35c148565ec53bbe65a53f31e252,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241957235378366,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fp4tg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: dc66092c-9183-4630-93cc-6ec4aa59a928,},Annotations:map[string]string{io.kubernetes.container.hash: 5b65e69b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f5d0f9b94a63b54376b5b3829bce9836e163d81fa80f24bf00e7f22b57d1a7a,PodSandboxId:455fc8fef39b80fd07b2de059ed7d5455df22677ec7846946cb948b87cbf9023,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING
,CreatedAt:1721241956624394448,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hnb5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d,},Annotations:map[string]string{io.kubernetes.container.hash: 80fc2e09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ebbb2d90c0739141688761958d1119db0b157d52ffc853e1617aae7b4bf391,PodSandboxId:aabc1991466408493cace4e1341882e1ba856c5c65e55c8fb572ee9a32e8e302,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:172124193613116032
5,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b95a014b1974e2af4c29b922c88ba23,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446730942de93d8fa246bfeb34d266f7bf40a70f2053eb3e9ac31212deff821,PodSandboxId:84fe57441a688f0d08a97f67b75df506036728b8fc5ada6ca6c0e0dbeec677ec,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:172124193614
5161953,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f45fb335c5e2df14c04532f6497e2b,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef9a4c788e9faf3a71500cb6e6711f5724fd07dbb7913c27ce756e69d8f30428,PodSandboxId:037fefa47cc3e2e9904b65a373b2dd771ffd70af156e34a05516c8f22a809237,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:172124
1936105803776,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b881b6fb22297dfca21c86875467d3,},Annotations:map[string]string{io.kubernetes.container.hash: 83137d99,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8e6b9c11d9cadb0acdcc1067e825e408e9b1254ab6fea64f318e165d96850aa,PodSandboxId:66ba99c8af289e788e9aa97aa463bbeec09c98cbc44cc6fd685aff9ece2cc687,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241936071541681,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9381e247719c18d6691e17ec6054a636be76ac6e3cda059f343170a5021edac6,PodSandboxId:e7f0782e6d6c684dbec94e6a3219bf7a955c607c4980918f26af71b26860402a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241647657505331,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-022930,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 938e7a2adce85acd36d4b5495f4d0c78,},Annotations:map[string]string{io.kubernetes.container.hash: 253c7078,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62e4453d-d91d-48bf-bbf1-775b8aeb8dc7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	218a44cd8585f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   fb12b6b348e3e       storage-provisioner
	5119186a70a76       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   7d313062ed407       coredns-7db6d8ff4d-jn64r
	d53dbf27ee711       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   3407801315db4       coredns-7db6d8ff4d-fp4tg
	0f5d0f9b94a63       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   16 minutes ago      Running             kube-proxy                0                   455fc8fef39b8       kube-proxy-hnb5v
	a446730942de9       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   17 minutes ago      Running             kube-controller-manager   2                   84fe57441a688       kube-controller-manager-default-k8s-diff-port-022930
	26ebbb2d90c07       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   17 minutes ago      Running             kube-scheduler            2                   aabc199146640       kube-scheduler-default-k8s-diff-port-022930
	ef9a4c788e9fa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   17 minutes ago      Running             etcd                      2                   037fefa47cc3e       etcd-default-k8s-diff-port-022930
	d8e6b9c11d9ca       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   17 minutes ago      Running             kube-apiserver            2                   66ba99c8af289       kube-apiserver-default-k8s-diff-port-022930
	9381e247719c1       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   22 minutes ago      Exited              kube-apiserver            1                   e7f0782e6d6c6       kube-apiserver-default-k8s-diff-port-022930
	
	
	==> coredns [5119186a70a760fe0c9b05022c775aaabe1a15791e247d7e841827098d306094] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d53dbf27ee711bde074b1abeee9bda1c0d830a983bf1acba2b6c8dfce83506a1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-022930
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-022930
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=default-k8s-diff-port-022930
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_45_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:45:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-022930
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:02:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 19:01:20 +0000   Wed, 17 Jul 2024 18:45:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 19:01:20 +0000   Wed, 17 Jul 2024 18:45:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 19:01:20 +0000   Wed, 17 Jul 2024 18:45:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 19:01:20 +0000   Wed, 17 Jul 2024 18:45:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.245
	  Hostname:    default-k8s-diff-port-022930
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1726fce1c58f432685c5f3f3c36f29de
	  System UUID:                1726fce1-c58f-4326-85c5-f3f3c36f29de
	  Boot ID:                    91256dde-6391-4dcc-8a3f-294e4be086b2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fp4tg                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7db6d8ff4d-jn64r                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-022930                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-022930             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-022930    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-hnb5v                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-022930             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-569cc877fc-pfmwt                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node default-k8s-diff-port-022930 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node default-k8s-diff-port-022930 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node default-k8s-diff-port-022930 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m   node-controller  Node default-k8s-diff-port-022930 event: Registered Node default-k8s-diff-port-022930 in Controller
	
	
	==> dmesg <==
	[  +0.052949] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.048252] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.764057] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.954080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.389021] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.099061] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.058762] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065474] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.171592] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.165371] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.277171] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[  +4.139605] systemd-fstab-generator[800]: Ignoring "noauto" option for root device
	[  +1.493155] systemd-fstab-generator[922]: Ignoring "noauto" option for root device
	[  +0.065565] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.510432] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.615223] kauditd_printk_skb: 79 callbacks suppressed
	[Jul17 18:45] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.786552] systemd-fstab-generator[3583]: Ignoring "noauto" option for root device
	[  +4.386271] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.681400] systemd-fstab-generator[3908]: Ignoring "noauto" option for root device
	[ +14.826199] systemd-fstab-generator[4110]: Ignoring "noauto" option for root device
	[  +0.105221] kauditd_printk_skb: 14 callbacks suppressed
	[Jul17 18:47] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [ef9a4c788e9faf3a71500cb6e6711f5724fd07dbb7913c27ce756e69d8f30428] <==
	{"level":"info","ts":"2024-07-17T18:45:36.863868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:36.863897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 received MsgPreVoteResp from 8287693677e84cf6 at term 1"}
	{"level":"info","ts":"2024-07-17T18:45:36.86391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became candidate at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:36.863918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 received MsgVoteResp from 8287693677e84cf6 at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:36.863935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8287693677e84cf6 became leader at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:36.863947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8287693677e84cf6 elected leader 8287693677e84cf6 at term 2"}
	{"level":"info","ts":"2024-07-17T18:45:36.868028Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:36.871956Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8287693677e84cf6","local-member-attributes":"{Name:default-k8s-diff-port-022930 ClientURLs:[https://192.168.50.245:2379]}","request-path":"/0/members/8287693677e84cf6/attributes","cluster-id":"6e727aea1cd049c6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:45:36.873794Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e727aea1cd049c6","local-member-id":"8287693677e84cf6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:36.873959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:36.873992Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:36.874058Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:36.88478Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:36.885543Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.245:2379"}
	{"level":"info","ts":"2024-07-17T18:45:36.891782Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:45:36.897824Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:45:36.91197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T18:55:37.453932Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-07-17T18:55:37.463286Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":682,"took":"9.027595ms","hash":2367149389,"current-db-size-bytes":2211840,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2211840,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-17T18:55:37.463399Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2367149389,"revision":682,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T19:00:37.461533Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":924}
	{"level":"info","ts":"2024-07-17T19:00:37.465805Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":924,"took":"3.426871ms","hash":2755393956,"current-db-size-bytes":2211840,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-17T19:00:37.465884Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2755393956,"revision":924,"compact-revision":682}
	{"level":"info","ts":"2024-07-17T19:01:14.744568Z","caller":"traceutil/trace.go:171","msg":"trace[701371869] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"143.213409ms","start":"2024-07-17T19:01:14.601311Z","end":"2024-07-17T19:01:14.744525Z","steps":["trace[701371869] 'process raft request'  (duration: 143.00706ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:02:04.774765Z","caller":"traceutil/trace.go:171","msg":"trace[122931094] transaction","detail":"{read_only:false; response_revision:1240; number_of_response:1; }","duration":"249.041631ms","start":"2024-07-17T19:02:04.525653Z","end":"2024-07-17T19:02:04.774695Z","steps":["trace[122931094] 'process raft request'  (duration: 248.863461ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:02:56 up 22 min,  0 users,  load average: 0.17, 0.16, 0.15
	Linux default-k8s-diff-port-022930 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9381e247719c18d6691e17ec6054a636be76ac6e3cda059f343170a5021edac6] <==
	W0717 18:45:27.789563       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.801014       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.804445       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.817902       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.856336       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.894776       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.907942       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:27.975805       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.016961       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.045172       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.170892       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.258995       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.378868       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:28.602072       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:31.721801       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.181991       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.262818       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.427102       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.518946       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.576976       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.754347       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.757871       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.849280       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.974281       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:32.981869       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d8e6b9c11d9cadb0acdcc1067e825e408e9b1254ab6fea64f318e165d96850aa] <==
	I0717 18:56:39.797304       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:58:39.796430       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:58:39.796531       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 18:58:39.796540       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:58:39.797616       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:58:39.797762       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 18:58:39.797795       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:00:38.802019       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:00:38.802840       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 19:00:39.803505       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:00:39.803556       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:00:39.803598       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:00:39.803687       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:00:39.803814       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:00:39.804958       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:01:39.804746       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:01:39.805002       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:01:39.805033       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:01:39.805137       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:01:39.805238       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:01:39.807051       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a446730942de93d8fa246bfeb34d266f7bf40a70f2053eb3e9ac31212deff821] <==
	I0717 18:57:25.935997       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:57:55.445135       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:57:55.944593       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:58:25.450618       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:58:25.952757       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:58:55.455546       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:58:55.964650       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:59:25.461207       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:59:25.972302       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:59:55.467252       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:59:55.981166       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:00:25.472573       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:00:25.989087       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:00:55.480265       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:00:56.007625       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:01:25.486376       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:01:26.019067       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:01:55.491367       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:01:56.026754       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 19:02:04.780411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="348.711µs"
	I0717 19:02:17.536685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="72.253µs"
	E0717 19:02:25.496110       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:02:26.036654       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:02:55.501343       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:02:56.044110       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0f5d0f9b94a63b54376b5b3829bce9836e163d81fa80f24bf00e7f22b57d1a7a] <==
	I0717 18:45:56.926246       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:45:56.940493       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.245"]
	I0717 18:45:57.001049       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:45:57.001086       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:45:57.001101       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:45:57.006162       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:45:57.006401       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:45:57.006413       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:45:57.008143       1 config.go:192] "Starting service config controller"
	I0717 18:45:57.008154       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:45:57.008195       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:45:57.008200       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:45:57.008575       1 config.go:319] "Starting node config controller"
	I0717 18:45:57.008583       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:45:57.108850       1 shared_informer.go:320] Caches are synced for node config
	I0717 18:45:57.108948       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:45:57.109001       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [26ebbb2d90c0739141688761958d1119db0b157d52ffc853e1617aae7b4bf391] <==
	W0717 18:45:38.811327       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:45:38.811349       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:45:38.811386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:45:38.811406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:45:38.811603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:45:38.811693       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:45:39.622254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:45:39.622319       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:45:39.648065       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:45:39.648262       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:45:39.674111       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:45:39.674223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:45:39.790761       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:45:39.790880       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:45:39.853201       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:45:39.853283       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:45:39.896746       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:45:39.896961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:45:39.922274       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:45:39.922360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:45:39.947780       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:45:39.948924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 18:45:39.985747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:45:39.986101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0717 18:45:41.900643       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 19:00:41 default-k8s-diff-port-022930 kubelet[3915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:00:41 default-k8s-diff-port-022930 kubelet[3915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:00:43 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:00:43.515567    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 19:00:58 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:00:58.513797    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 19:01:09 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:01:09.518011    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 19:01:23 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:01:23.514779    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 19:01:37 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:01:37.514885    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 19:01:41 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:01:41.541031    3915 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:01:41 default-k8s-diff-port-022930 kubelet[3915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:01:41 default-k8s-diff-port-022930 kubelet[3915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:01:41 default-k8s-diff-port-022930 kubelet[3915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:01:41 default-k8s-diff-port-022930 kubelet[3915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:01:52 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:01:52.527024    3915 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 19:01:52 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:01:52.527341    3915 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 17 19:01:52 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:01:52.527580    3915 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99xxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdi
nOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-pfmwt_kube-system(39616dfc-215e-4af5-90f7-12fc28304494): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 17 19:01:52 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:01:52.527819    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 19:02:04 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:02:04.514409    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 19:02:17 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:02:17.516242    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 19:02:32 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:02:32.514751    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	Jul 17 19:02:41 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:02:41.542507    3915 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:02:41 default-k8s-diff-port-022930 kubelet[3915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:02:41 default-k8s-diff-port-022930 kubelet[3915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:02:41 default-k8s-diff-port-022930 kubelet[3915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:02:41 default-k8s-diff-port-022930 kubelet[3915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:02:47 default-k8s-diff-port-022930 kubelet[3915]: E0717 19:02:47.515339    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-pfmwt" podUID="39616dfc-215e-4af5-90f7-12fc28304494"
	
	
	==> storage-provisioner [218a44cd8585fbb83856c49696567afd594b4da967ac5ce50a0f632e2a6138cf] <==
	I0717 18:45:58.006792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:45:58.015577       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:45:58.015616       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:45:58.026335       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:45:58.028218       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-022930_0ef1e56d-bcfe-49d9-8bc8-60eb7d40d4bb!
	I0717 18:45:58.029070       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b39b949f-dc71-4797-979a-a1feb97bb555", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-022930_0ef1e56d-bcfe-49d9-8bc8-60eb7d40d4bb became leader
	I0717 18:45:58.128686       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-022930_0ef1e56d-bcfe-49d9-8bc8-60eb7d40d4bb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-022930 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-pfmwt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-022930 describe pod metrics-server-569cc877fc-pfmwt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-022930 describe pod metrics-server-569cc877fc-pfmwt: exit status 1 (56.30609ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-pfmwt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-022930 describe pod metrics-server-569cc877fc-pfmwt: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (474.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (360.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-527415 -n embed-certs-527415
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-17 19:01:22.84704399 +0000 UTC m=+6597.576239953
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-527415 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-527415 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.518µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-527415 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
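Note: the deployment info above is empty because the describe call ran after the test's context deadline was already exhausted (it failed in 1.518µs), so nothing was collected. A sketch (not part of the captured output) for re-checking the image outside the test harness, assuming the embed-certs-527415 context is still reachable:

	# Expected to contain registry.k8s.io/echoserver:1.4 per the dashboard --images override.
	kubectl --context embed-certs-527415 -n kubernetes-dashboard \
	  get deployment dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'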
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-527415 -n embed-certs-527415
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-527415 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-527415 logs -n 25: (1.169474323s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-527415            | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-371172                                        | pause-371172                 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-341716 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | disable-driver-mounts-341716                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:34 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-066175             | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC | 17 Jul 24 18:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-066175                                   | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-022930  | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC | 17 Jul 24 18:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-527415                 | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-019549        | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-066175                  | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-066175 --memory=2200                     | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:45 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-019549             | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-022930       | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC | 17 Jul 24 18:45 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 19:00 UTC | 17 Jul 24 19:00 UTC |
	| start   | -p newest-cni-875270 --memory=2200 --alsologtostderr   | newest-cni-875270            | jenkins | v1.33.1 | 17 Jul 24 19:00 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-066175                                   | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 19:00 UTC | 17 Jul 24 19:00 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 19:00:43
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:00:43.979043   87211 out.go:291] Setting OutFile to fd 1 ...
	I0717 19:00:43.979200   87211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:00:43.979207   87211 out.go:304] Setting ErrFile to fd 2...
	I0717 19:00:43.979213   87211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 19:00:43.979429   87211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 19:00:43.980043   87211 out.go:298] Setting JSON to false
	I0717 19:00:43.981243   87211 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9787,"bootTime":1721233057,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:00:43.981316   87211 start.go:139] virtualization: kvm guest
	I0717 19:00:43.983421   87211 out.go:177] * [newest-cni-875270] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:00:43.984734   87211 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 19:00:43.984809   87211 notify.go:220] Checking for updates...
	I0717 19:00:43.987182   87211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:00:43.988431   87211 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 19:00:43.989690   87211 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 19:00:43.990872   87211 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:00:43.991912   87211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:00:43.993425   87211 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:00:43.993510   87211 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 19:00:43.993597   87211 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:00:43.993687   87211 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 19:00:44.030158   87211 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 19:00:44.031277   87211 start.go:297] selected driver: kvm2
	I0717 19:00:44.031300   87211 start.go:901] validating driver "kvm2" against <nil>
	I0717 19:00:44.031315   87211 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:00:44.032177   87211 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:00:44.032296   87211 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 19:00:44.047964   87211 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 19:00:44.048010   87211 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0717 19:00:44.048032   87211 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0717 19:00:44.048321   87211 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 19:00:44.048349   87211 cni.go:84] Creating CNI manager for ""
	I0717 19:00:44.048361   87211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:00:44.048371   87211 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 19:00:44.048443   87211 start.go:340] cluster config:
	{Name:newest-cni-875270 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:00:44.048578   87211 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:00:44.050341   87211 out.go:177] * Starting "newest-cni-875270" primary control-plane node in "newest-cni-875270" cluster
	I0717 19:00:44.051379   87211 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:00:44.051411   87211 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 19:00:44.051418   87211 cache.go:56] Caching tarball of preloaded images
	I0717 19:00:44.051521   87211 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:00:44.051533   87211 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0717 19:00:44.051620   87211 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/config.json ...
	I0717 19:00:44.051637   87211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/config.json: {Name:mk044a77ab4f3aa203a7005f385efe96ee1d0310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:00:44.051762   87211 start.go:360] acquireMachinesLock for newest-cni-875270: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 19:00:44.051789   87211 start.go:364] duration metric: took 15.052µs to acquireMachinesLock for "newest-cni-875270"
	I0717 19:00:44.051805   87211 start.go:93] Provisioning new machine with config: &{Name:newest-cni-875270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:00:44.051867   87211 start.go:125] createHost starting for "" (driver="kvm2")
	I0717 19:00:44.053362   87211 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0717 19:00:44.053494   87211 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 19:00:44.053531   87211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 19:00:44.068348   87211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0717 19:00:44.068858   87211 main.go:141] libmachine: () Calling .GetVersion
	I0717 19:00:44.069443   87211 main.go:141] libmachine: Using API Version  1
	I0717 19:00:44.069466   87211 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 19:00:44.069811   87211 main.go:141] libmachine: () Calling .GetMachineName
	I0717 19:00:44.069989   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetMachineName
	I0717 19:00:44.070148   87211 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:00:44.070293   87211 start.go:159] libmachine.API.Create for "newest-cni-875270" (driver="kvm2")
	I0717 19:00:44.070324   87211 client.go:168] LocalClient.Create starting
	I0717 19:00:44.070351   87211 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem
	I0717 19:00:44.070412   87211 main.go:141] libmachine: Decoding PEM data...
	I0717 19:00:44.070428   87211 main.go:141] libmachine: Parsing certificate...
	I0717 19:00:44.070476   87211 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem
	I0717 19:00:44.070494   87211 main.go:141] libmachine: Decoding PEM data...
	I0717 19:00:44.070503   87211 main.go:141] libmachine: Parsing certificate...
	I0717 19:00:44.070534   87211 main.go:141] libmachine: Running pre-create checks...
	I0717 19:00:44.070542   87211 main.go:141] libmachine: (newest-cni-875270) Calling .PreCreateCheck
	I0717 19:00:44.070902   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetConfigRaw
	I0717 19:00:44.071271   87211 main.go:141] libmachine: Creating machine...
	I0717 19:00:44.071292   87211 main.go:141] libmachine: (newest-cni-875270) Calling .Create
	I0717 19:00:44.071424   87211 main.go:141] libmachine: (newest-cni-875270) Creating KVM machine...
	I0717 19:00:44.072642   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found existing default KVM network
	I0717 19:00:44.074382   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:44.074238   87234 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f800}
	I0717 19:00:44.074413   87211 main.go:141] libmachine: (newest-cni-875270) DBG | created network xml: 
	I0717 19:00:44.074422   87211 main.go:141] libmachine: (newest-cni-875270) DBG | <network>
	I0717 19:00:44.074427   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   <name>mk-newest-cni-875270</name>
	I0717 19:00:44.074433   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   <dns enable='no'/>
	I0717 19:00:44.074437   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   
	I0717 19:00:44.074443   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0717 19:00:44.074448   87211 main.go:141] libmachine: (newest-cni-875270) DBG |     <dhcp>
	I0717 19:00:44.074454   87211 main.go:141] libmachine: (newest-cni-875270) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0717 19:00:44.074458   87211 main.go:141] libmachine: (newest-cni-875270) DBG |     </dhcp>
	I0717 19:00:44.074463   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   </ip>
	I0717 19:00:44.074472   87211 main.go:141] libmachine: (newest-cni-875270) DBG |   
	I0717 19:00:44.074477   87211 main.go:141] libmachine: (newest-cni-875270) DBG | </network>
	I0717 19:00:44.074482   87211 main.go:141] libmachine: (newest-cni-875270) DBG | 
	I0717 19:00:44.079690   87211 main.go:141] libmachine: (newest-cni-875270) DBG | trying to create private KVM network mk-newest-cni-875270 192.168.39.0/24...
	I0717 19:00:44.150454   87211 main.go:141] libmachine: (newest-cni-875270) DBG | private KVM network mk-newest-cni-875270 192.168.39.0/24 created
	I0717 19:00:44.150478   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:44.150427   87234 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 19:00:44.150491   87211 main.go:141] libmachine: (newest-cni-875270) Setting up store path in /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270 ...
	I0717 19:00:44.150507   87211 main.go:141] libmachine: (newest-cni-875270) Building disk image from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 19:00:44.150600   87211 main.go:141] libmachine: (newest-cni-875270) Downloading /home/jenkins/minikube-integration/19283-14386/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso...
	I0717 19:00:44.386986   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:44.386867   87234 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa...
	I0717 19:00:44.565767   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:44.565598   87234 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/newest-cni-875270.rawdisk...
	I0717 19:00:44.565806   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Writing magic tar header
	I0717 19:00:44.565826   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Writing SSH key tar header
	I0717 19:00:44.565839   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:44.565763   87234 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270 ...
	I0717 19:00:44.565969   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270
	I0717 19:00:44.566000   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270 (perms=drwx------)
	I0717 19:00:44.566012   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube/machines
	I0717 19:00:44.566028   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube/machines (perms=drwxr-xr-x)
	I0717 19:00:44.566047   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386/.minikube (perms=drwxr-xr-x)
	I0717 19:00:44.566062   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins/minikube-integration/19283-14386 (perms=drwxrwxr-x)
	I0717 19:00:44.566080   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 19:00:44.566094   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0717 19:00:44.566107   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19283-14386
	I0717 19:00:44.566122   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0717 19:00:44.566134   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home/jenkins
	I0717 19:00:44.566152   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Checking permissions on dir: /home
	I0717 19:00:44.566163   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Skipping /home - not owner
	I0717 19:00:44.566174   87211 main.go:141] libmachine: (newest-cni-875270) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0717 19:00:44.566189   87211 main.go:141] libmachine: (newest-cni-875270) Creating domain...
	I0717 19:00:44.567326   87211 main.go:141] libmachine: (newest-cni-875270) define libvirt domain using xml: 
	I0717 19:00:44.567346   87211 main.go:141] libmachine: (newest-cni-875270) <domain type='kvm'>
	I0717 19:00:44.567354   87211 main.go:141] libmachine: (newest-cni-875270)   <name>newest-cni-875270</name>
	I0717 19:00:44.567362   87211 main.go:141] libmachine: (newest-cni-875270)   <memory unit='MiB'>2200</memory>
	I0717 19:00:44.567367   87211 main.go:141] libmachine: (newest-cni-875270)   <vcpu>2</vcpu>
	I0717 19:00:44.567371   87211 main.go:141] libmachine: (newest-cni-875270)   <features>
	I0717 19:00:44.567376   87211 main.go:141] libmachine: (newest-cni-875270)     <acpi/>
	I0717 19:00:44.567383   87211 main.go:141] libmachine: (newest-cni-875270)     <apic/>
	I0717 19:00:44.567390   87211 main.go:141] libmachine: (newest-cni-875270)     <pae/>
	I0717 19:00:44.567399   87211 main.go:141] libmachine: (newest-cni-875270)     
	I0717 19:00:44.567408   87211 main.go:141] libmachine: (newest-cni-875270)   </features>
	I0717 19:00:44.567418   87211 main.go:141] libmachine: (newest-cni-875270)   <cpu mode='host-passthrough'>
	I0717 19:00:44.567427   87211 main.go:141] libmachine: (newest-cni-875270)   
	I0717 19:00:44.567432   87211 main.go:141] libmachine: (newest-cni-875270)   </cpu>
	I0717 19:00:44.567437   87211 main.go:141] libmachine: (newest-cni-875270)   <os>
	I0717 19:00:44.567441   87211 main.go:141] libmachine: (newest-cni-875270)     <type>hvm</type>
	I0717 19:00:44.567449   87211 main.go:141] libmachine: (newest-cni-875270)     <boot dev='cdrom'/>
	I0717 19:00:44.567453   87211 main.go:141] libmachine: (newest-cni-875270)     <boot dev='hd'/>
	I0717 19:00:44.567459   87211 main.go:141] libmachine: (newest-cni-875270)     <bootmenu enable='no'/>
	I0717 19:00:44.567465   87211 main.go:141] libmachine: (newest-cni-875270)   </os>
	I0717 19:00:44.567470   87211 main.go:141] libmachine: (newest-cni-875270)   <devices>
	I0717 19:00:44.567475   87211 main.go:141] libmachine: (newest-cni-875270)     <disk type='file' device='cdrom'>
	I0717 19:00:44.567486   87211 main.go:141] libmachine: (newest-cni-875270)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/boot2docker.iso'/>
	I0717 19:00:44.567514   87211 main.go:141] libmachine: (newest-cni-875270)       <target dev='hdc' bus='scsi'/>
	I0717 19:00:44.567524   87211 main.go:141] libmachine: (newest-cni-875270)       <readonly/>
	I0717 19:00:44.567534   87211 main.go:141] libmachine: (newest-cni-875270)     </disk>
	I0717 19:00:44.567544   87211 main.go:141] libmachine: (newest-cni-875270)     <disk type='file' device='disk'>
	I0717 19:00:44.567553   87211 main.go:141] libmachine: (newest-cni-875270)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0717 19:00:44.567564   87211 main.go:141] libmachine: (newest-cni-875270)       <source file='/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/newest-cni-875270.rawdisk'/>
	I0717 19:00:44.567572   87211 main.go:141] libmachine: (newest-cni-875270)       <target dev='hda' bus='virtio'/>
	I0717 19:00:44.567597   87211 main.go:141] libmachine: (newest-cni-875270)     </disk>
	I0717 19:00:44.567624   87211 main.go:141] libmachine: (newest-cni-875270)     <interface type='network'>
	I0717 19:00:44.567639   87211 main.go:141] libmachine: (newest-cni-875270)       <source network='mk-newest-cni-875270'/>
	I0717 19:00:44.567651   87211 main.go:141] libmachine: (newest-cni-875270)       <model type='virtio'/>
	I0717 19:00:44.567663   87211 main.go:141] libmachine: (newest-cni-875270)     </interface>
	I0717 19:00:44.567679   87211 main.go:141] libmachine: (newest-cni-875270)     <interface type='network'>
	I0717 19:00:44.567691   87211 main.go:141] libmachine: (newest-cni-875270)       <source network='default'/>
	I0717 19:00:44.567710   87211 main.go:141] libmachine: (newest-cni-875270)       <model type='virtio'/>
	I0717 19:00:44.567752   87211 main.go:141] libmachine: (newest-cni-875270)     </interface>
	I0717 19:00:44.567772   87211 main.go:141] libmachine: (newest-cni-875270)     <serial type='pty'>
	I0717 19:00:44.567782   87211 main.go:141] libmachine: (newest-cni-875270)       <target port='0'/>
	I0717 19:00:44.567789   87211 main.go:141] libmachine: (newest-cni-875270)     </serial>
	I0717 19:00:44.567800   87211 main.go:141] libmachine: (newest-cni-875270)     <console type='pty'>
	I0717 19:00:44.567811   87211 main.go:141] libmachine: (newest-cni-875270)       <target type='serial' port='0'/>
	I0717 19:00:44.567821   87211 main.go:141] libmachine: (newest-cni-875270)     </console>
	I0717 19:00:44.567831   87211 main.go:141] libmachine: (newest-cni-875270)     <rng model='virtio'>
	I0717 19:00:44.567849   87211 main.go:141] libmachine: (newest-cni-875270)       <backend model='random'>/dev/random</backend>
	I0717 19:00:44.567884   87211 main.go:141] libmachine: (newest-cni-875270)     </rng>
	I0717 19:00:44.567898   87211 main.go:141] libmachine: (newest-cni-875270)     
	I0717 19:00:44.567902   87211 main.go:141] libmachine: (newest-cni-875270)     
	I0717 19:00:44.567910   87211 main.go:141] libmachine: (newest-cni-875270)   </devices>
	I0717 19:00:44.567915   87211 main.go:141] libmachine: (newest-cni-875270) </domain>
	I0717 19:00:44.567923   87211 main.go:141] libmachine: (newest-cni-875270) 
	I0717 19:00:44.572539   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:3f:27:1d in network default
	I0717 19:00:44.573331   87211 main.go:141] libmachine: (newest-cni-875270) Ensuring networks are active...
	I0717 19:00:44.573367   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:44.574135   87211 main.go:141] libmachine: (newest-cni-875270) Ensuring network default is active
	I0717 19:00:44.574471   87211 main.go:141] libmachine: (newest-cni-875270) Ensuring network mk-newest-cni-875270 is active
	I0717 19:00:44.575097   87211 main.go:141] libmachine: (newest-cni-875270) Getting domain xml...
	I0717 19:00:44.575849   87211 main.go:141] libmachine: (newest-cni-875270) Creating domain...
	I0717 19:00:45.828532   87211 main.go:141] libmachine: (newest-cni-875270) Waiting to get IP...
	I0717 19:00:45.829469   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:45.829885   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:45.829912   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:45.829866   87234 retry.go:31] will retry after 263.920251ms: waiting for machine to come up
	I0717 19:00:46.095335   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:46.095954   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:46.095981   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:46.095911   87234 retry.go:31] will retry after 363.178186ms: waiting for machine to come up
	I0717 19:00:46.460512   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:46.460977   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:46.461012   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:46.460928   87234 retry.go:31] will retry after 409.665021ms: waiting for machine to come up
	I0717 19:00:46.872744   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:46.873247   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:46.873273   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:46.873205   87234 retry.go:31] will retry after 563.902745ms: waiting for machine to come up
	I0717 19:00:47.439068   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:47.439656   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:47.439683   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:47.439604   87234 retry.go:31] will retry after 733.359581ms: waiting for machine to come up
	I0717 19:00:48.174089   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:48.174581   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:48.174605   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:48.174521   87234 retry.go:31] will retry after 942.690499ms: waiting for machine to come up
	I0717 19:00:49.119131   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:49.119532   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:49.119562   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:49.119511   87234 retry.go:31] will retry after 1.141544671s: waiting for machine to come up
	I0717 19:00:50.262357   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:50.262777   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:50.262801   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:50.262738   87234 retry.go:31] will retry after 1.467163596s: waiting for machine to come up
	I0717 19:00:51.731003   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:51.731354   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:51.731376   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:51.731308   87234 retry.go:31] will retry after 1.199886437s: waiting for machine to come up
	I0717 19:00:52.932457   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:52.933110   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:52.933139   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:52.933070   87234 retry.go:31] will retry after 1.540490534s: waiting for machine to come up
	I0717 19:00:54.475087   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:54.475857   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:54.475899   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:54.475782   87234 retry.go:31] will retry after 1.763306289s: waiting for machine to come up
	I0717 19:00:56.241759   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:56.242146   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:56.242174   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:56.242110   87234 retry.go:31] will retry after 3.400168516s: waiting for machine to come up
	I0717 19:00:59.644132   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:00:59.644648   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find current IP address of domain newest-cni-875270 in network mk-newest-cni-875270
	I0717 19:00:59.644689   87211 main.go:141] libmachine: (newest-cni-875270) DBG | I0717 19:00:59.644616   87234 retry.go:31] will retry after 3.544348471s: waiting for machine to come up
	I0717 19:01:03.190893   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.191400   87211 main.go:141] libmachine: (newest-cni-875270) Found IP for machine: 192.168.39.225
	I0717 19:01:03.191426   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has current primary IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.191432   87211 main.go:141] libmachine: (newest-cni-875270) Reserving static IP address...
	I0717 19:01:03.191806   87211 main.go:141] libmachine: (newest-cni-875270) DBG | unable to find host DHCP lease matching {name: "newest-cni-875270", mac: "52:54:00:2d:7e:1a", ip: "192.168.39.225"} in network mk-newest-cni-875270
	I0717 19:01:03.265128   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Getting to WaitForSSH function...
	I0717 19:01:03.265157   87211 main.go:141] libmachine: (newest-cni-875270) Reserved static IP address: 192.168.39.225
	I0717 19:01:03.265171   87211 main.go:141] libmachine: (newest-cni-875270) Waiting for SSH to be available...
	I0717 19:01:03.268147   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.268608   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:03.268635   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.268777   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Using SSH client type: external
	I0717 19:01:03.268802   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa (-rw-------)
	I0717 19:01:03.268874   87211 main.go:141] libmachine: (newest-cni-875270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 19:01:03.268906   87211 main.go:141] libmachine: (newest-cni-875270) DBG | About to run SSH command:
	I0717 19:01:03.268938   87211 main.go:141] libmachine: (newest-cni-875270) DBG | exit 0
	I0717 19:01:03.397305   87211 main.go:141] libmachine: (newest-cni-875270) DBG | SSH cmd err, output: <nil>: 
	I0717 19:01:03.397572   87211 main.go:141] libmachine: (newest-cni-875270) KVM machine creation complete!
	I0717 19:01:03.397936   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetConfigRaw
	I0717 19:01:03.398410   87211 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:03.398573   87211 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:03.398773   87211 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0717 19:01:03.398786   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetState
	I0717 19:01:03.400277   87211 main.go:141] libmachine: Detecting operating system of created instance...
	I0717 19:01:03.400291   87211 main.go:141] libmachine: Waiting for SSH to be available...
	I0717 19:01:03.400298   87211 main.go:141] libmachine: Getting to WaitForSSH function...
	I0717 19:01:03.400306   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:03.402638   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.402955   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:03.402980   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.403110   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:03.403294   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:03.403445   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:03.403575   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:03.403755   87211 main.go:141] libmachine: Using SSH client type: native
	I0717 19:01:03.403984   87211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0717 19:01:03.404004   87211 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0717 19:01:03.511900   87211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:01:03.511923   87211 main.go:141] libmachine: Detecting the provisioner...
	I0717 19:01:03.511934   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:03.514960   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.515435   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:03.515463   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.515650   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:03.515866   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:03.516047   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:03.516256   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:03.516474   87211 main.go:141] libmachine: Using SSH client type: native
	I0717 19:01:03.516681   87211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0717 19:01:03.516695   87211 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0717 19:01:03.625869   87211 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0717 19:01:03.625961   87211 main.go:141] libmachine: found compatible host: buildroot
	I0717 19:01:03.625970   87211 main.go:141] libmachine: Provisioning with buildroot...
	I0717 19:01:03.625977   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetMachineName
	I0717 19:01:03.626219   87211 buildroot.go:166] provisioning hostname "newest-cni-875270"
	I0717 19:01:03.626248   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetMachineName
	I0717 19:01:03.626439   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:03.629103   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.629508   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:03.629545   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.629665   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:03.629933   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:03.630099   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:03.630213   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:03.630404   87211 main.go:141] libmachine: Using SSH client type: native
	I0717 19:01:03.630579   87211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0717 19:01:03.630592   87211 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-875270 && echo "newest-cni-875270" | sudo tee /etc/hostname
	I0717 19:01:03.758018   87211 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-875270
	
	I0717 19:01:03.758042   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:03.761202   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.761540   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:03.761579   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.761763   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:03.761940   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:03.762086   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:03.762245   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:03.762433   87211 main.go:141] libmachine: Using SSH client type: native
	I0717 19:01:03.762600   87211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0717 19:01:03.762615   87211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-875270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-875270/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-875270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:01:03.880688   87211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
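For reference, the guard in the command above only touches /etc/hosts when no entry already ends in the new hostname: it rewrites an existing 127.0.1.1 line if there is one, otherwise appends one. A minimal Go sketch of assembling that snippet before sending it over SSH; buildHostsFix is a hypothetical helper name, not minikube's actual API:

package main

import "fmt"

// buildHostsFix returns a shell snippet that ensures /etc/hosts maps
// 127.0.1.1 to the given hostname, mirroring the command in the log above.
func buildHostsFix(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	// Print the snippet that would be run on the guest for this profile name.
	fmt.Println(buildHostsFix("newest-cni-875270"))
}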
	I0717 19:01:03.880713   87211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 19:01:03.880748   87211 buildroot.go:174] setting up certificates
	I0717 19:01:03.880764   87211 provision.go:84] configureAuth start
	I0717 19:01:03.880779   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetMachineName
	I0717 19:01:03.881097   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetIP
	I0717 19:01:03.883837   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.884180   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:03.884227   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.884336   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:03.886414   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.886735   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:03.886762   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:03.886874   87211 provision.go:143] copyHostCerts
	I0717 19:01:03.886941   87211 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 19:01:03.886960   87211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 19:01:03.887039   87211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 19:01:03.887158   87211 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 19:01:03.887169   87211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 19:01:03.887200   87211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 19:01:03.887283   87211 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 19:01:03.887291   87211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 19:01:03.887319   87211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 19:01:03.887396   87211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.newest-cni-875270 san=[127.0.0.1 192.168.39.225 localhost minikube newest-cni-875270]
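The SAN list above (loopback, the guest's DHCP address, localhost, minikube, and the profile name) lets a single server certificate match whichever name a client dials. A minimal sketch of issuing a certificate with that SAN set using only Go's standard library; it is self-signed here for brevity, whereas minikube signs the real server.pem with the profile's CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-875270"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Subject Alternative Names matching the san=[...] list in the log.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-875270"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.225")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}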
	I0717 19:01:04.011303   87211 provision.go:177] copyRemoteCerts
	I0717 19:01:04.011368   87211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:01:04.011391   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:04.014026   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.014343   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:04.014370   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.014602   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:04.014822   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:04.014990   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:04.015128   87211 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:01:04.102361   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 19:01:04.124063   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 19:01:04.147017   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:01:04.169249   87211 provision.go:87] duration metric: took 288.468883ms to configureAuth
	I0717 19:01:04.169278   87211 buildroot.go:189] setting minikube options for container-runtime
	I0717 19:01:04.169503   87211 config.go:182] Loaded profile config "newest-cni-875270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 19:01:04.169596   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:04.172252   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.172649   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:04.172677   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.172912   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:04.173105   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:04.173310   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:04.173469   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:04.173660   87211 main.go:141] libmachine: Using SSH client type: native
	I0717 19:01:04.173829   87211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0717 19:01:04.173843   87211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:01:04.440831   87211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:01:04.440861   87211 main.go:141] libmachine: Checking connection to Docker...
	I0717 19:01:04.440872   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetURL
	I0717 19:01:04.442299   87211 main.go:141] libmachine: (newest-cni-875270) DBG | Using libvirt version 6000000
	I0717 19:01:04.444285   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.444658   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:04.444683   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.444977   87211 main.go:141] libmachine: Docker is up and running!
	I0717 19:01:04.444995   87211 main.go:141] libmachine: Reticulating splines...
	I0717 19:01:04.445003   87211 client.go:171] duration metric: took 20.374671388s to LocalClient.Create
	I0717 19:01:04.445027   87211 start.go:167] duration metric: took 20.374737242s to libmachine.API.Create "newest-cni-875270"
	I0717 19:01:04.445036   87211 start.go:293] postStartSetup for "newest-cni-875270" (driver="kvm2")
	I0717 19:01:04.445055   87211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:01:04.445084   87211 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:04.445328   87211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:01:04.445362   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:04.447372   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.447732   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:04.447755   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.447859   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:04.448020   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:04.448207   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:04.448381   87211 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:01:04.546741   87211 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:01:04.551229   87211 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 19:01:04.551253   87211 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 19:01:04.551307   87211 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 19:01:04.551379   87211 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 19:01:04.551502   87211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:01:04.561829   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 19:01:04.583250   87211 start.go:296] duration metric: took 138.201217ms for postStartSetup
	I0717 19:01:04.583315   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetConfigRaw
	I0717 19:01:04.583966   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetIP
	I0717 19:01:04.586592   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.586956   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:04.586985   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.587151   87211 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/config.json ...
	I0717 19:01:04.587317   87211 start.go:128] duration metric: took 20.53544076s to createHost
	I0717 19:01:04.587346   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:04.589462   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.589751   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:04.589776   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.589928   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:04.590094   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:04.590246   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:04.590401   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:04.590530   87211 main.go:141] libmachine: Using SSH client type: native
	I0717 19:01:04.590715   87211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0717 19:01:04.590728   87211 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 19:01:04.700972   87211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721242864.675301137
	
	I0717 19:01:04.700996   87211 fix.go:216] guest clock: 1721242864.675301137
	I0717 19:01:04.701005   87211 fix.go:229] Guest: 2024-07-17 19:01:04.675301137 +0000 UTC Remote: 2024-07-17 19:01:04.587328142 +0000 UTC m=+20.642449515 (delta=87.972995ms)
	I0717 19:01:04.701028   87211 fix.go:200] guest clock delta is within tolerance: 87.972995ms
	I0717 19:01:04.701034   87211 start.go:83] releasing machines lock for "newest-cni-875270", held for 20.649236432s
	I0717 19:01:04.701055   87211 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:04.701305   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetIP
	I0717 19:01:04.703784   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.704239   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:04.704282   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.704373   87211 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:04.704970   87211 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:04.705137   87211 main.go:141] libmachine: (newest-cni-875270) Calling .DriverName
	I0717 19:01:04.705229   87211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:01:04.705280   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:04.705392   87211 ssh_runner.go:195] Run: cat /version.json
	I0717 19:01:04.705417   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHHostname
	I0717 19:01:04.708086   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.708289   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.708511   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:04.708538   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.708648   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:04.708666   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:04.708701   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:04.708893   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:04.708895   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHPort
	I0717 19:01:04.709061   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:04.709079   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHKeyPath
	I0717 19:01:04.709227   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetSSHUsername
	I0717 19:01:04.709242   87211 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:01:04.709368   87211 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/newest-cni-875270/id_rsa Username:docker}
	I0717 19:01:04.814255   87211 ssh_runner.go:195] Run: systemctl --version
	I0717 19:01:04.819893   87211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:01:04.980904   87211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 19:01:04.986213   87211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 19:01:04.986270   87211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:01:05.003712   87211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:01:05.003735   87211 start.go:495] detecting cgroup driver to use...
	I0717 19:01:05.003796   87211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:01:05.022481   87211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:01:05.035958   87211 docker.go:217] disabling cri-docker service (if available) ...
	I0717 19:01:05.036012   87211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:01:05.048839   87211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:01:05.061795   87211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:01:05.188401   87211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:01:05.347843   87211 docker.go:233] disabling docker service ...
	I0717 19:01:05.347910   87211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:01:05.361745   87211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:01:05.373754   87211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:01:05.493053   87211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:01:05.617766   87211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:01:05.630434   87211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:01:05.647294   87211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 19:01:05.647382   87211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:05.657271   87211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:01:05.657327   87211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:05.667368   87211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:05.676818   87211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:05.686933   87211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:01:05.696936   87211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:05.706783   87211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:01:05.722539   87211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
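The sed run above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A hedged sketch of driving the first few of those edits locally with os/exec; minikube actually issues them through its ssh_runner on the guest, and the config path is assumed to already exist:

package main

import (
	"log"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// Each entry is a sed expression applied to the CRI-O drop-in, matching
	// the commands recorded in the log above.
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		`/conmon_cgroup = .*/d`,
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	}
	for _, e := range edits {
		if out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput(); err != nil {
			log.Fatalf("sed %q failed: %v\n%s", e, err, out)
		}
	}
}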
	I0717 19:01:05.732262   87211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:01:05.741021   87211 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 19:01:05.741071   87211 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 19:01:05.752736   87211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
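The status-255 sysctl probe above is expected: /proc/sys/net/bridge/bridge-nf-call-iptables only appears once br_netfilter is loaded, after which IPv4 forwarding is enabled by writing 1 into procfs. A small sketch of the same two steps (assumes root and that modprobe is on PATH):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Load the bridge netfilter module so the bridge-nf-call-* sysctls exist.
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
	}
	// Enable IPv4 forwarding, the same effect as echo 1 > /proc/sys/net/ipv4/ip_forward.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}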
	I0717 19:01:05.761372   87211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:01:05.887167   87211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:01:06.020433   87211 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:01:06.020510   87211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:01:06.025060   87211 start.go:563] Will wait 60s for crictl version
	I0717 19:01:06.025129   87211 ssh_runner.go:195] Run: which crictl
	I0717 19:01:06.028484   87211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:01:06.070488   87211 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 19:01:06.070564   87211 ssh_runner.go:195] Run: crio --version
	I0717 19:01:06.098858   87211 ssh_runner.go:195] Run: crio --version
	I0717 19:01:06.126312   87211 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 19:01:06.127451   87211 main.go:141] libmachine: (newest-cni-875270) Calling .GetIP
	I0717 19:01:06.129978   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:06.130315   87211 main.go:141] libmachine: (newest-cni-875270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7e:1a", ip: ""} in network mk-newest-cni-875270: {Iface:virbr1 ExpiryTime:2024-07-17 20:00:57 +0000 UTC Type:0 Mac:52:54:00:2d:7e:1a Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:newest-cni-875270 Clientid:01:52:54:00:2d:7e:1a}
	I0717 19:01:06.130339   87211 main.go:141] libmachine: (newest-cni-875270) DBG | domain newest-cni-875270 has defined IP address 192.168.39.225 and MAC address 52:54:00:2d:7e:1a in network mk-newest-cni-875270
	I0717 19:01:06.130607   87211 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 19:01:06.134666   87211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:01:06.148267   87211 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 19:01:06.149396   87211 kubeadm.go:883] updating cluster {Name:newest-cni-875270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 19:01:06.149507   87211 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 19:01:06.149557   87211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:01:06.179313   87211 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 19:01:06.179384   87211 ssh_runner.go:195] Run: which lz4
	I0717 19:01:06.183049   87211 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:01:06.186895   87211 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:01:06.186923   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0717 19:01:07.412191   87211 crio.go:462] duration metric: took 1.229171554s to copy over tarball
	I0717 19:01:07.412264   87211 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:01:09.367899   87211 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.955604992s)
	I0717 19:01:09.367940   87211 crio.go:469] duration metric: took 1.955722225s to extract the tarball
	I0717 19:01:09.367951   87211 ssh_runner.go:146] rm: /preloaded.tar.lz4
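The preload step copies the ~387 MB image tarball to /preloaded.tar.lz4, unpacks it into /var with xattrs preserved so file capabilities survive, then deletes the tarball. A sketch of the extraction step alone, shelling out the same way the logged command does (paths assumed as in the log):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Extract the preloaded image tarball into /var, preserving xattrs,
	// mirroring the tar invocation recorded in the log above.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v\n%s", err, out)
	}
}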
	I0717 19:01:09.403190   87211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:01:09.444349   87211 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 19:01:09.444372   87211 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:01:09.444380   87211 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.31.0-beta.0 crio true true} ...
	I0717 19:01:09.444486   87211 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-875270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 19:01:09.444574   87211 ssh_runner.go:195] Run: crio config
	I0717 19:01:09.493946   87211 cni.go:84] Creating CNI manager for ""
	I0717 19:01:09.493974   87211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:01:09.493986   87211 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0717 19:01:09.494006   87211 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-875270 NodeName:newest-cni-875270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] Feature
Args:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:01:09.494146   87211 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-875270"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
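The three YAML documents above (kubeadm InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options logged earlier and written to /var/tmp/minikube/kubeadm.yaml.new. A hypothetical text/template sketch of filling in the node-specific fields; the fragment and field names here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// params holds the handful of values that vary per node in the fragment below.
type params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	p := params{
		AdvertiseAddress: "192.168.39.225",
		BindPort:         8443,
		NodeName:         "newest-cni-875270",
		PodSubnet:        "10.42.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	}
	// Render to stdout; minikube writes the full document to kubeadm.yaml.new instead.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}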
	
	I0717 19:01:09.494202   87211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 19:01:09.504815   87211 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:01:09.504876   87211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:01:09.515098   87211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0717 19:01:09.531842   87211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 19:01:09.548700   87211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0717 19:01:09.563708   87211 ssh_runner.go:195] Run: grep 192.168.39.225	control-plane.minikube.internal$ /etc/hosts
	I0717 19:01:09.567173   87211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:01:09.578550   87211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:01:09.711336   87211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 19:01:09.727438   87211 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270 for IP: 192.168.39.225
	I0717 19:01:09.727460   87211 certs.go:194] generating shared ca certs ...
	I0717 19:01:09.727478   87211 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:01:09.727619   87211 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 19:01:09.727670   87211 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 19:01:09.727683   87211 certs.go:256] generating profile certs ...
	I0717 19:01:09.727743   87211 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/client.key
	I0717 19:01:09.727759   87211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/client.crt with IP's: []
	I0717 19:01:09.844800   87211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/client.crt ...
	I0717 19:01:09.844828   87211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/client.crt: {Name:mkd285eb8b7b8cdd63d224fd5b394e06c04e7a90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:01:09.845051   87211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/client.key ...
	I0717 19:01:09.845073   87211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/client.key: {Name:mkf603f911d7f3afbe2147ebf64899a752d01743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:01:09.845187   87211 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.key.b86eadd9
	I0717 19:01:09.845203   87211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.crt.b86eadd9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225]
	I0717 19:01:09.899025   87211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.crt.b86eadd9 ...
	I0717 19:01:09.899050   87211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.crt.b86eadd9: {Name:mk60f98f1b53f4eadfc8f3a9638c42248c30ff06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:01:09.899204   87211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.key.b86eadd9 ...
	I0717 19:01:09.899216   87211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.key.b86eadd9: {Name:mkf362de1811c84f126929018bed3d55bb9052e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:01:09.899282   87211 certs.go:381] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.crt.b86eadd9 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.crt
	I0717 19:01:09.899369   87211 certs.go:385] copying /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.key.b86eadd9 -> /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.key
	I0717 19:01:09.899425   87211 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/proxy-client.key
	I0717 19:01:09.899440   87211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/proxy-client.crt with IP's: []
	I0717 19:01:10.002042   87211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/proxy-client.crt ...
	I0717 19:01:10.002070   87211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/proxy-client.crt: {Name:mk8d23ae0e522c8292b1dbe045dad4d2ffb00e6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:01:10.002235   87211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/proxy-client.key ...
	I0717 19:01:10.002246   87211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/proxy-client.key: {Name:mkcb4fdc3db4725b9a81e47f163587ca18d1b696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:01:10.002412   87211 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 19:01:10.002453   87211 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 19:01:10.002484   87211 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 19:01:10.002515   87211 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 19:01:10.002541   87211 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:01:10.002563   87211 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 19:01:10.002599   87211 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 19:01:10.003226   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:01:10.027348   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 19:01:10.049006   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:01:10.072568   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 19:01:10.094791   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 19:01:10.117068   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:01:10.138690   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:01:10.160877   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/newest-cni-875270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:01:10.182514   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:01:10.204581   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 19:01:10.226328   87211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 19:01:10.247732   87211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:01:10.261992   87211 ssh_runner.go:195] Run: openssl version
	I0717 19:01:10.266880   87211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:01:10.276330   87211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:01:10.280023   87211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:01:10.280070   87211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:01:10.285046   87211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:01:10.294668   87211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 19:01:10.304247   87211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 19:01:10.308121   87211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 19:01:10.308165   87211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 19:01:10.313572   87211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 19:01:10.324815   87211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 19:01:10.334920   87211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 19:01:10.338847   87211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 19:01:10.338916   87211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 19:01:10.344279   87211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
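The openssl x509 -hash steps above are what make the installed CAs discoverable: each PEM is symlinked into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0) so OpenSSL-based clients can find it by hash lookup. A sketch of that link-by-hash step, assuming openssl is on PATH and the caller can write to /etc/ssl/certs:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// linkByHash symlinks a CA certificate into /etc/ssl/certs under its
// OpenSSL subject-hash name (e.g. b5213941.0), as the commands above do.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ignore error: the link may not exist yet
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}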
	I0717 19:01:10.355629   87211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 19:01:10.359137   87211 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 19:01:10.359185   87211 kubeadm.go:392] StartCluster: {Name:newest-cni-875270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:newest-cni-875270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 19:01:10.359257   87211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:01:10.359303   87211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:01:10.396578   87211 cri.go:89] found id: ""
	I0717 19:01:10.396654   87211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:01:10.405906   87211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:01:10.414705   87211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:01:10.424217   87211 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:01:10.424234   87211 kubeadm.go:157] found existing configuration files:
	
	I0717 19:01:10.424271   87211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:01:10.433013   87211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 19:01:10.433071   87211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 19:01:10.442652   87211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:01:10.454936   87211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 19:01:10.455021   87211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 19:01:10.470230   87211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:01:10.486691   87211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 19:01:10.486739   87211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:01:10.500877   87211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:01:10.518797   87211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 19:01:10.518858   87211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
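	[editor note] The four grep/rm pairs above are a staleness check: each kubeconfig is kept only if it already references the expected control-plane endpoint, otherwise it is removed before kubeadm init runs. A minimal local Go sketch of that pattern (minikube actually issues these commands over SSH via ssh_runner; the endpoint and paths are copied from the log, everything else here is an illustrative assumption):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Expected API server URL, as seen in the grep commands above.
		endpoint := "https://control-plane.minikube.internal:8443"
		configs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range configs {
			// grep exits non-zero when the endpoint (or the file itself) is missing.
			if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
				fmt.Printf("%s looks stale or absent, removing\n", path)
				_ = exec.Command("sudo", "rm", "-f", path).Run()
			}
		}
	}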
	I0717 19:01:10.528249   87211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 19:01:10.629310   87211 kubeadm.go:310] W0717 19:01:10.610434     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 19:01:10.630099   87211 kubeadm.go:310] W0717 19:01:10.611321     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 19:01:10.727686   87211 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:01:21.278442   87211 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 19:01:21.278512   87211 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 19:01:21.278614   87211 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:01:21.278726   87211 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:01:21.278821   87211 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 19:01:21.278913   87211 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:01:21.280570   87211 out.go:204]   - Generating certificates and keys ...
	I0717 19:01:21.280666   87211 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 19:01:21.280737   87211 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 19:01:21.280821   87211 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 19:01:21.280907   87211 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 19:01:21.280994   87211 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 19:01:21.281081   87211 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 19:01:21.281166   87211 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 19:01:21.281337   87211 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-875270] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0717 19:01:21.281403   87211 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 19:01:21.281612   87211 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-875270] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0717 19:01:21.281693   87211 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 19:01:21.281869   87211 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 19:01:21.281946   87211 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 19:01:21.282014   87211 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:01:21.282058   87211 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:01:21.282106   87211 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 19:01:21.282178   87211 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:01:21.282279   87211 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:01:21.282349   87211 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:01:21.282434   87211 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:01:21.282530   87211 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:01:21.284130   87211 out.go:204]   - Booting up control plane ...
	I0717 19:01:21.284246   87211 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:01:21.284359   87211 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:01:21.284447   87211 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:01:21.284574   87211 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:01:21.284687   87211 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:01:21.284736   87211 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 19:01:21.284912   87211 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 19:01:21.285024   87211 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 19:01:21.285092   87211 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.585125ms
	I0717 19:01:21.285181   87211 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 19:01:21.285253   87211 kubeadm.go:310] [api-check] The API server is healthy after 5.501875791s
	I0717 19:01:21.285397   87211 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:01:21.285544   87211 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:01:21.285633   87211 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:01:21.285830   87211 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-875270 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:01:21.285906   87211 kubeadm.go:310] [bootstrap-token] Using token: wj3eba.70abwb60ifsnrd1d
	I0717 19:01:21.287433   87211 out.go:204]   - Configuring RBAC rules ...
	I0717 19:01:21.287547   87211 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:01:21.287625   87211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:01:21.287744   87211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:01:21.287853   87211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:01:21.287947   87211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:01:21.288023   87211 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:01:21.288135   87211 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:01:21.288180   87211 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 19:01:21.288226   87211 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 19:01:21.288237   87211 kubeadm.go:310] 
	I0717 19:01:21.288286   87211 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 19:01:21.288292   87211 kubeadm.go:310] 
	I0717 19:01:21.288363   87211 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 19:01:21.288369   87211 kubeadm.go:310] 
	I0717 19:01:21.288393   87211 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 19:01:21.288448   87211 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:01:21.288508   87211 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:01:21.288515   87211 kubeadm.go:310] 
	I0717 19:01:21.288559   87211 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 19:01:21.288565   87211 kubeadm.go:310] 
	I0717 19:01:21.288611   87211 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:01:21.288618   87211 kubeadm.go:310] 
	I0717 19:01:21.288665   87211 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 19:01:21.288735   87211 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:01:21.288792   87211 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:01:21.288798   87211 kubeadm.go:310] 
	I0717 19:01:21.288871   87211 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:01:21.288961   87211 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 19:01:21.288975   87211 kubeadm.go:310] 
	I0717 19:01:21.289051   87211 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wj3eba.70abwb60ifsnrd1d \
	I0717 19:01:21.289143   87211 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 19:01:21.289163   87211 kubeadm.go:310] 	--control-plane 
	I0717 19:01:21.289166   87211 kubeadm.go:310] 
	I0717 19:01:21.289241   87211 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:01:21.289246   87211 kubeadm.go:310] 
	I0717 19:01:21.289333   87211 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wj3eba.70abwb60ifsnrd1d \
	I0717 19:01:21.289505   87211 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
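	[editor note] The join commands above embed a --discovery-token-ca-cert-hash, which is a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. A short Go sketch of recomputing that hash from the CA file (the path is the kubeadm default and an assumption here, not taken from this run):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// kubeadm's default location for the cluster CA; adjust for other layouts.
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The discovery hash is SHA-256 of the CA's DER SubjectPublicKeyInfo.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}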
	I0717 19:01:21.289518   87211 cni.go:84] Creating CNI manager for ""
	I0717 19:01:21.289527   87211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 19:01:21.291039   87211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.423094127Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242883423065866,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27c96882-c3b5-4c4c-ae8e-c663a619de74 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.423582941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed38f0ac-6d5f-46bf-aebf-f7d8b0c32529 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.423669720Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed38f0ac-6d5f-46bf-aebf-f7d8b0c32529 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.423943721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084cde7459c3484dc827fb95bcba3e12f9f645203aaf24df4adca837533190a1,PodSandboxId:743b7d635cfdb1f479dcbe06e739415f139acab6b4527d1bac8eb85bcc144aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241977665371070,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f473bbe-0727-4f25-ba39-4ed322767465,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9339a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:235f37418508acb48fcc568777f192c2d4ff408bb07c34e60ff528dea9b3d667,PodSandboxId:2d69fcb8f2d1d58f23a79b7f3659cd09bebcdb6921c894f9a1b0e97ad7d5bccd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241977022783713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f64kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0de6ef4-1402-44b2-81f3-3f234a72d151,},Annotations:map[string]string{io.kubernetes.container.hash: 3015fca6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a6b37c689d268a287e557e56dbd0f797e5b2a3730aa6ebd8af87140cc7730a,PodSandboxId:aa9485aab31cf0542a265efeef3a4cc43ef650a004ed8acd7bf72b539cba793c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241976797489685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2zt8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
2e90bb-5721-4ca8-8177-77e6b686175a,},Annotations:map[string]string{io.kubernetes.container.hash: 281e4adb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8211fc3362773d22258f266fb6992dc3a1cd5e4c663ba81a4bff531da4f7a47b,PodSandboxId:ff640218fc03e161303717a0241a423a64dcbadce452bcff096c2b57aed7283c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1721241976049738936,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m52fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40f99883-b343-43b3-8f94-4b45b379a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 8937d2b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11369f730c593193e0a51ab3b1884ff6e0c4427208f7684a4848d89cbca3f6f,PodSandboxId:5d69b061b22086c6bddfd20a559c7fad2550ac962a582276b8ce7bb41c7e5376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241956671166428,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4839e942333313189ee9d179d15c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f8de2d1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b10cf1d32d7f3c017da8f0dbe36f599bdb5ff6b6311bb8990129bcf1cec6dd,PodSandboxId:d407d3c1ec4c739b654836a85a28e210df7b8a51d487b1b5b38ae32abb07b809,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241956644669760,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76b685168398b77d009e8c3f7a3fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449290763e32cf4c1846fddcb73f1114ef0063c64231695d6e179e78ee4df22,PodSandboxId:417d02f26b483ae5dfd01e5b0408303e45e35d0525ad82700a1fe65c52de8f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241956614625350,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b289dab6de17ab6177769769972038a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d41b06ad4b2ab53745589288ead395180d7211c2722000c8cd8a00c52ea336a,PodSandboxId:92ac90338726334044e2ca283e436c9a37604b2fcd2671112fbfaecbd3632fb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241956550359305,Labels:map[string]string{io.kubernetes.container.
name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b10dddaf722511ea0efce15f066ecda5c95b478b728ec1ae9bd372d21694007,PodSandboxId:ad2e057203c1d3a5178ca18241e263f8e713c572996edf3324f709e7a51a81f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241666192293722,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed38f0ac-6d5f-46bf-aebf-f7d8b0c32529 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.463712299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d0f14f8-0e25-469a-be58-248bddce383f name=/runtime.v1.RuntimeService/Version
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.463787759Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d0f14f8-0e25-469a-be58-248bddce383f name=/runtime.v1.RuntimeService/Version
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.465329436Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f282d548-b522-4012-9b60-2192a1a83b64 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.465701435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242883465680836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f282d548-b522-4012-9b60-2192a1a83b64 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.466465807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4df9ea70-daf8-40ca-8155-a2665ea2fa22 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.466529883Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4df9ea70-daf8-40ca-8155-a2665ea2fa22 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.466719728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084cde7459c3484dc827fb95bcba3e12f9f645203aaf24df4adca837533190a1,PodSandboxId:743b7d635cfdb1f479dcbe06e739415f139acab6b4527d1bac8eb85bcc144aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241977665371070,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f473bbe-0727-4f25-ba39-4ed322767465,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9339a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:235f37418508acb48fcc568777f192c2d4ff408bb07c34e60ff528dea9b3d667,PodSandboxId:2d69fcb8f2d1d58f23a79b7f3659cd09bebcdb6921c894f9a1b0e97ad7d5bccd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241977022783713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f64kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0de6ef4-1402-44b2-81f3-3f234a72d151,},Annotations:map[string]string{io.kubernetes.container.hash: 3015fca6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a6b37c689d268a287e557e56dbd0f797e5b2a3730aa6ebd8af87140cc7730a,PodSandboxId:aa9485aab31cf0542a265efeef3a4cc43ef650a004ed8acd7bf72b539cba793c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241976797489685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2zt8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
2e90bb-5721-4ca8-8177-77e6b686175a,},Annotations:map[string]string{io.kubernetes.container.hash: 281e4adb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8211fc3362773d22258f266fb6992dc3a1cd5e4c663ba81a4bff531da4f7a47b,PodSandboxId:ff640218fc03e161303717a0241a423a64dcbadce452bcff096c2b57aed7283c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1721241976049738936,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m52fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40f99883-b343-43b3-8f94-4b45b379a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 8937d2b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11369f730c593193e0a51ab3b1884ff6e0c4427208f7684a4848d89cbca3f6f,PodSandboxId:5d69b061b22086c6bddfd20a559c7fad2550ac962a582276b8ce7bb41c7e5376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241956671166428,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4839e942333313189ee9d179d15c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f8de2d1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b10cf1d32d7f3c017da8f0dbe36f599bdb5ff6b6311bb8990129bcf1cec6dd,PodSandboxId:d407d3c1ec4c739b654836a85a28e210df7b8a51d487b1b5b38ae32abb07b809,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241956644669760,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76b685168398b77d009e8c3f7a3fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449290763e32cf4c1846fddcb73f1114ef0063c64231695d6e179e78ee4df22,PodSandboxId:417d02f26b483ae5dfd01e5b0408303e45e35d0525ad82700a1fe65c52de8f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241956614625350,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b289dab6de17ab6177769769972038a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d41b06ad4b2ab53745589288ead395180d7211c2722000c8cd8a00c52ea336a,PodSandboxId:92ac90338726334044e2ca283e436c9a37604b2fcd2671112fbfaecbd3632fb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241956550359305,Labels:map[string]string{io.kubernetes.container.
name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b10dddaf722511ea0efce15f066ecda5c95b478b728ec1ae9bd372d21694007,PodSandboxId:ad2e057203c1d3a5178ca18241e263f8e713c572996edf3324f709e7a51a81f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241666192293722,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4df9ea70-daf8-40ca-8155-a2665ea2fa22 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.505872840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27b2a0cc-753b-4a5b-9384-288d7e51ed34 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.505958499Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27b2a0cc-753b-4a5b-9384-288d7e51ed34 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.507224744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=396cc912-efa7-4791-9f00-da1925b6331a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.507663390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242883507640006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=396cc912-efa7-4791-9f00-da1925b6331a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.508280988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b0717a8-d7cc-4522-9720-c18f4b91acdc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.508338079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b0717a8-d7cc-4522-9720-c18f4b91acdc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.508569391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084cde7459c3484dc827fb95bcba3e12f9f645203aaf24df4adca837533190a1,PodSandboxId:743b7d635cfdb1f479dcbe06e739415f139acab6b4527d1bac8eb85bcc144aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241977665371070,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f473bbe-0727-4f25-ba39-4ed322767465,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9339a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:235f37418508acb48fcc568777f192c2d4ff408bb07c34e60ff528dea9b3d667,PodSandboxId:2d69fcb8f2d1d58f23a79b7f3659cd09bebcdb6921c894f9a1b0e97ad7d5bccd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241977022783713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f64kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0de6ef4-1402-44b2-81f3-3f234a72d151,},Annotations:map[string]string{io.kubernetes.container.hash: 3015fca6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a6b37c689d268a287e557e56dbd0f797e5b2a3730aa6ebd8af87140cc7730a,PodSandboxId:aa9485aab31cf0542a265efeef3a4cc43ef650a004ed8acd7bf72b539cba793c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241976797489685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2zt8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
2e90bb-5721-4ca8-8177-77e6b686175a,},Annotations:map[string]string{io.kubernetes.container.hash: 281e4adb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8211fc3362773d22258f266fb6992dc3a1cd5e4c663ba81a4bff531da4f7a47b,PodSandboxId:ff640218fc03e161303717a0241a423a64dcbadce452bcff096c2b57aed7283c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1721241976049738936,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m52fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40f99883-b343-43b3-8f94-4b45b379a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 8937d2b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11369f730c593193e0a51ab3b1884ff6e0c4427208f7684a4848d89cbca3f6f,PodSandboxId:5d69b061b22086c6bddfd20a559c7fad2550ac962a582276b8ce7bb41c7e5376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241956671166428,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4839e942333313189ee9d179d15c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f8de2d1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b10cf1d32d7f3c017da8f0dbe36f599bdb5ff6b6311bb8990129bcf1cec6dd,PodSandboxId:d407d3c1ec4c739b654836a85a28e210df7b8a51d487b1b5b38ae32abb07b809,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241956644669760,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76b685168398b77d009e8c3f7a3fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449290763e32cf4c1846fddcb73f1114ef0063c64231695d6e179e78ee4df22,PodSandboxId:417d02f26b483ae5dfd01e5b0408303e45e35d0525ad82700a1fe65c52de8f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241956614625350,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b289dab6de17ab6177769769972038a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d41b06ad4b2ab53745589288ead395180d7211c2722000c8cd8a00c52ea336a,PodSandboxId:92ac90338726334044e2ca283e436c9a37604b2fcd2671112fbfaecbd3632fb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241956550359305,Labels:map[string]string{io.kubernetes.container.
name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b10dddaf722511ea0efce15f066ecda5c95b478b728ec1ae9bd372d21694007,PodSandboxId:ad2e057203c1d3a5178ca18241e263f8e713c572996edf3324f709e7a51a81f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241666192293722,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b0717a8-d7cc-4522-9720-c18f4b91acdc name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.543700364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e421049-4d27-4e14-b836-42ed4ff9d6aa name=/runtime.v1.RuntimeService/Version
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.543770498Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e421049-4d27-4e14-b836-42ed4ff9d6aa name=/runtime.v1.RuntimeService/Version
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.545013893Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50850ce5-340b-45cf-94f4-d46e9da90106 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.545391317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242883545371889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133282,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50850ce5-340b-45cf-94f4-d46e9da90106 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.546034513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4186d65-90db-4875-ab36-9745647fc1d9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.546085337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4186d65-90db-4875-ab36-9745647fc1d9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:01:23 embed-certs-527415 crio[724]: time="2024-07-17 19:01:23.546299635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:084cde7459c3484dc827fb95bcba3e12f9f645203aaf24df4adca837533190a1,PodSandboxId:743b7d635cfdb1f479dcbe06e739415f139acab6b4527d1bac8eb85bcc144aab,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721241977665371070,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f473bbe-0727-4f25-ba39-4ed322767465,},Annotations:map[string]string{io.kubernetes.container.hash: 1c9339a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:235f37418508acb48fcc568777f192c2d4ff408bb07c34e60ff528dea9b3d667,PodSandboxId:2d69fcb8f2d1d58f23a79b7f3659cd09bebcdb6921c894f9a1b0e97ad7d5bccd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241977022783713,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f64kh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0de6ef4-1402-44b2-81f3-3f234a72d151,},Annotations:map[string]string{io.kubernetes.container.hash: 3015fca6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a6b37c689d268a287e557e56dbd0f797e5b2a3730aa6ebd8af87140cc7730a,PodSandboxId:aa9485aab31cf0542a265efeef3a4cc43ef650a004ed8acd7bf72b539cba793c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721241976797489685,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2zt8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e
2e90bb-5721-4ca8-8177-77e6b686175a,},Annotations:map[string]string{io.kubernetes.container.hash: 281e4adb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8211fc3362773d22258f266fb6992dc3a1cd5e4c663ba81a4bff531da4f7a47b,PodSandboxId:ff640218fc03e161303717a0241a423a64dcbadce452bcff096c2b57aed7283c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,State:CONTAINER_RUNNING,CreatedAt:
1721241976049738936,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m52fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40f99883-b343-43b3-8f94-4b45b379a17b,},Annotations:map[string]string{io.kubernetes.container.hash: 8937d2b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11369f730c593193e0a51ab3b1884ff6e0c4427208f7684a4848d89cbca3f6f,PodSandboxId:5d69b061b22086c6bddfd20a559c7fad2550ac962a582276b8ce7bb41c7e5376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721241956671166428,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4839e942333313189ee9d179d15c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3f8de2d1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b10cf1d32d7f3c017da8f0dbe36f599bdb5ff6b6311bb8990129bcf1cec6dd,PodSandboxId:d407d3c1ec4c739b654836a85a28e210df7b8a51d487b1b5b38ae32abb07b809,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,State:CONTAINER_RUNNING,CreatedAt:1721241956644669760,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b76b685168398b77d009e8c3f7a3fe87,},Annotations:map[string]string{io.kubernetes.container.hash: 838e9a2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e449290763e32cf4c1846fddcb73f1114ef0063c64231695d6e179e78ee4df22,PodSandboxId:417d02f26b483ae5dfd01e5b0408303e45e35d0525ad82700a1fe65c52de8f31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,State:CONTAINER_RUNNING,CreatedAt:1721241956614625350,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b289dab6de17ab6177769769972038a4,},Annotations:map[string]string{io.kubernetes.container.hash: 7bcc7ce4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d41b06ad4b2ab53745589288ead395180d7211c2722000c8cd8a00c52ea336a,PodSandboxId:92ac90338726334044e2ca283e436c9a37604b2fcd2671112fbfaecbd3632fb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_RUNNING,CreatedAt:1721241956550359305,Labels:map[string]string{io.kubernetes.container.
name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b10dddaf722511ea0efce15f066ecda5c95b478b728ec1ae9bd372d21694007,PodSandboxId:ad2e057203c1d3a5178ca18241e263f8e713c572996edf3324f709e7a51a81f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,State:CONTAINER_EXITED,CreatedAt:1721241666192293722,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-527415,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d49c409df9a954b7691247df1c8d9f62,},Annotations:map[string]string{io.kubernetes.container.hash: 24f8465e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4186d65-90db-4875-ab36-9745647fc1d9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	084cde7459c34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   743b7d635cfdb       storage-provisioner
	235f37418508a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   2d69fcb8f2d1d       coredns-7db6d8ff4d-f64kh
	38a6b37c689d2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   aa9485aab31cf       coredns-7db6d8ff4d-2zt8k
	8211fc3362773       53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772   15 minutes ago      Running             kube-proxy                0                   ff640218fc03e       kube-proxy-m52fq
	f11369f730c59       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   5d69b061b2208       etcd-embed-certs-527415
	55b10cf1d32d7       7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940   15 minutes ago      Running             kube-scheduler            2                   d407d3c1ec4c7       kube-scheduler-embed-certs-527415
	e449290763e32       e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974   15 minutes ago      Running             kube-controller-manager   2                   417d02f26b483       kube-controller-manager-embed-certs-527415
	1d41b06ad4b2a       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   15 minutes ago      Running             kube-apiserver            2                   92ac903387263       kube-apiserver-embed-certs-527415
	2b10dddaf7225       56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe   20 minutes ago      Exited              kube-apiserver            1                   ad2e057203c1d       kube-apiserver-embed-certs-527415
	
	
	==> coredns [235f37418508acb48fcc568777f192c2d4ff408bb07c34e60ff528dea9b3d667] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [38a6b37c689d268a287e557e56dbd0f797e5b2a3730aa6ebd8af87140cc7730a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-527415
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-527415
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
	                    minikube.k8s.io/name=embed-certs-527415
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T18_46_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 18:45:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-527415
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 19:01:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 18:56:34 +0000   Wed, 17 Jul 2024 18:45:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 18:56:34 +0000   Wed, 17 Jul 2024 18:45:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 18:56:34 +0000   Wed, 17 Jul 2024 18:45:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 18:56:34 +0000   Wed, 17 Jul 2024 18:45:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.90
	  Hostname:    embed-certs-527415
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 87ee6cbc85374f4bbc0c06e2cbb3cc08
	  System UUID:                87ee6cbc-8537-4f4b-bc0c-06e2cbb3cc08
	  Boot ID:                    2c1ee72b-5496-4ad4-827f-43db07eaa370
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2zt8k                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-f64kh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-527415                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-527415             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-527415    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-m52fq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-527415             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-hvxtg               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-527415 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-527415 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-527415 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-527415 event: Registered Node embed-certs-527415 in Controller
	
	
	==> dmesg <==
	[  +0.039395] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.618263] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.823078] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.562158] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.990290] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.060111] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068930] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.183058] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.144409] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.266915] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[Jul17 18:41] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +1.889242] systemd-fstab-generator[927]: Ignoring "noauto" option for root device
	[  +0.062997] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.507151] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.274568] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.886694] kauditd_printk_skb: 27 callbacks suppressed
	[Jul17 18:45] systemd-fstab-generator[3574]: Ignoring "noauto" option for root device
	[  +0.070249] kauditd_printk_skb: 9 callbacks suppressed
	[Jul17 18:46] systemd-fstab-generator[3897]: Ignoring "noauto" option for root device
	[  +0.081317] kauditd_printk_skb: 54 callbacks suppressed
	[ +14.289528] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.026696] systemd-fstab-generator[4118]: Ignoring "noauto" option for root device
	[Jul17 18:47] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [f11369f730c593193e0a51ab3b1884ff6e0c4427208f7684a4848d89cbca3f6f] <==
	{"level":"info","ts":"2024-07-17T18:45:57.162079Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:57.164099Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"70b1d9345947c0fd","local-member-attributes":"{Name:embed-certs-527415 ClientURLs:[https://192.168.61.90:2379]}","request-path":"/0/members/70b1d9345947c0fd/attributes","cluster-id":"b1c18dc80f06de23","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T18:45:57.165374Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:57.165481Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b1c18dc80f06de23","local-member-id":"70b1d9345947c0fd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:57.165612Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:57.167835Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T18:45:57.16573Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T18:45:57.169584Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T18:45:57.17388Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T18:45:57.17391Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T18:45:57.179436Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.90:2379"}
	{"level":"info","ts":"2024-07-17T18:55:57.873273Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":673}
	{"level":"info","ts":"2024-07-17T18:55:57.881634Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":673,"took":"8.008878ms","hash":758100020,"current-db-size-bytes":2158592,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2158592,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-17T18:55:57.881688Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":758100020,"revision":673,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T19:00:57.884077Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":915}
	{"level":"info","ts":"2024-07-17T19:00:57.888172Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":915,"took":"3.36791ms","hash":209590036,"current-db-size-bytes":2158592,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1519616,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-07-17T19:00:57.888249Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":209590036,"revision":915,"compact-revision":673}
	{"level":"warn","ts":"2024-07-17T19:01:11.398577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.404611ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13906430387354235673 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.90\" mod_revision:1163 > success:<request_put:<key:\"/registry/masterleases/192.168.61.90\" value_size:66 lease:4683058350499459863 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.90\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T19:01:11.398886Z","caller":"traceutil/trace.go:171","msg":"trace[668483660] transaction","detail":"{read_only:false; response_revision:1171; number_of_response:1; }","duration":"249.892777ms","start":"2024-07-17T19:01:11.148956Z","end":"2024-07-17T19:01:11.398848Z","steps":["trace[668483660] 'process raft request'  (duration: 126.501193ms)","trace[668483660] 'compare'  (duration: 122.231612ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:01:12.552578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.142604ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:01:12.55272Z","caller":"traceutil/trace.go:171","msg":"trace[1645211917] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1172; }","duration":"120.327669ms","start":"2024-07-17T19:01:12.432375Z","end":"2024-07-17T19:01:12.552703Z","steps":["trace[1645211917] 'range keys from in-memory index tree'  (duration: 119.991925ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:01:14.42435Z","caller":"traceutil/trace.go:171","msg":"trace[72317012] linearizableReadLoop","detail":"{readStateIndex:1370; appliedIndex:1369; }","duration":"108.421445ms","start":"2024-07-17T19:01:14.315915Z","end":"2024-07-17T19:01:14.424337Z","steps":["trace[72317012] 'read index received'  (duration: 108.279707ms)","trace[72317012] 'applied index is now lower than readState.Index'  (duration: 141.118µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T19:01:14.424465Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.530317ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-17T19:01:14.424487Z","caller":"traceutil/trace.go:171","msg":"trace[2032368489] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1173; }","duration":"108.590542ms","start":"2024-07-17T19:01:14.315891Z","end":"2024-07-17T19:01:14.424481Z","steps":["trace[2032368489] 'agreement among raft nodes before linearized reading'  (duration: 108.514505ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T19:01:14.424666Z","caller":"traceutil/trace.go:171","msg":"trace[1853306326] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"123.500156ms","start":"2024-07-17T19:01:14.301153Z","end":"2024-07-17T19:01:14.424653Z","steps":["trace[1853306326] 'process raft request'  (duration: 123.084159ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:01:23 up 20 min,  0 users,  load average: 0.09, 0.24, 0.19
	Linux embed-certs-527415 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1d41b06ad4b2ab53745589288ead395180d7211c2722000c8cd8a00c52ea336a] <==
	I0717 18:56:00.405888       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:57:00.405392       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:57:00.405604       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 18:57:00.405632       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:57:00.406883       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:57:00.406912       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 18:57:00.406919       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:59:00.406728       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:59:00.406887       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 18:59:00.406896       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 18:59:00.407924       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 18:59:00.407988       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 18:59:00.408015       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:00:59.408994       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:00:59.409119       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0717 19:01:00.409914       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:01:00.409962       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 19:01:00.409972       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 19:01:00.410028       1 handler_proxy.go:93] no RequestInfo found in the context
	E0717 19:01:00.410077       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 19:01:00.411252       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [2b10dddaf722511ea0efce15f066ecda5c95b478b728ec1ae9bd372d21694007] <==
	W0717 18:45:52.356616       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.373072       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.397268       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.465792       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.477978       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.490414       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.499468       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.512674       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.572650       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.576011       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.674197       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.682177       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.711295       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.868716       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:52.983075       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.029255       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.074631       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.167116       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.171576       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.293438       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.352096       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.387176       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.469503       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.483094       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 18:45:53.497871       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [e449290763e32cf4c1846fddcb73f1114ef0063c64231695d6e179e78ee4df22] <==
	I0717 18:55:46.272589       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:56:15.801390       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:56:16.281187       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:56:45.806701       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:56:46.289461       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:57:15.811794       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:57:16.297787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0717 18:57:24.892679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="100.086µs"
	I0717 18:57:39.897558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="106.247µs"
	E0717 18:57:45.816311       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:57:46.304685       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:58:15.822531       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:58:16.313935       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:58:45.827603       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:58:46.321509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:59:15.831983       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:59:16.329344       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 18:59:45.837081       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 18:59:46.336896       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:00:15.842275       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:00:16.346845       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:00:45.848676       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:00:46.356070       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0717 19:01:15.854154       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0717 19:01:16.364101       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8211fc3362773d22258f266fb6992dc3a1cd5e4c663ba81a4bff531da4f7a47b] <==
	I0717 18:46:16.320393       1 server_linux.go:69] "Using iptables proxy"
	I0717 18:46:16.345033       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.90"]
	I0717 18:46:16.412948       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0717 18:46:16.412997       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0717 18:46:16.413014       1 server_linux.go:165] "Using iptables Proxier"
	I0717 18:46:16.416300       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:46:16.416517       1 server.go:872] "Version info" version="v1.30.2"
	I0717 18:46:16.416542       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:46:16.418236       1 config.go:192] "Starting service config controller"
	I0717 18:46:16.418265       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 18:46:16.418302       1 config.go:101] "Starting endpoint slice config controller"
	I0717 18:46:16.418307       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 18:46:16.422744       1 config.go:319] "Starting node config controller"
	I0717 18:46:16.422756       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 18:46:16.518874       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 18:46:16.518913       1 shared_informer.go:320] Caches are synced for service config
	I0717 18:46:16.522982       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [55b10cf1d32d7f3c017da8f0dbe36f599bdb5ff6b6311bb8990129bcf1cec6dd] <==
	W0717 18:45:59.439651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:45:59.439689       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:45:59.439591       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:45:59.439709       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 18:45:59.439470       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:45:59.439723       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 18:45:59.439860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:45:59.439939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:46:00.262348       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:46:00.262473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:46:00.294856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:46:00.294962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 18:46:00.360997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:46:00.361558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:46:00.395931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:46:00.395988       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:46:00.419385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:46:00.419685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:46:00.433792       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:46:00.433862       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:46:00.454389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:46:00.454468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 18:46:00.593281       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:46:00.593364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 18:46:01.030632       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 18:59:01 embed-certs-527415 kubelet[3904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 18:59:01 embed-certs-527415 kubelet[3904]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 18:59:01 embed-certs-527415 kubelet[3904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 18:59:01 embed-certs-527415 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 18:59:11 embed-certs-527415 kubelet[3904]: E0717 18:59:11.881178    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:59:22 embed-certs-527415 kubelet[3904]: E0717 18:59:22.878879    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:59:34 embed-certs-527415 kubelet[3904]: E0717 18:59:34.878854    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 18:59:46 embed-certs-527415 kubelet[3904]: E0717 18:59:46.878602    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 19:00:00 embed-certs-527415 kubelet[3904]: E0717 19:00:00.878371    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 19:00:01 embed-certs-527415 kubelet[3904]: E0717 19:00:01.905501    3904 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:00:01 embed-certs-527415 kubelet[3904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:00:01 embed-certs-527415 kubelet[3904]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:00:01 embed-certs-527415 kubelet[3904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:00:01 embed-certs-527415 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:00:13 embed-certs-527415 kubelet[3904]: E0717 19:00:13.878460    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 19:00:26 embed-certs-527415 kubelet[3904]: E0717 19:00:26.879259    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 19:00:40 embed-certs-527415 kubelet[3904]: E0717 19:00:40.878779    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 19:00:54 embed-certs-527415 kubelet[3904]: E0717 19:00:54.879752    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 19:01:01 embed-certs-527415 kubelet[3904]: E0717 19:01:01.904789    3904 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 17 19:01:01 embed-certs-527415 kubelet[3904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 17 19:01:01 embed-certs-527415 kubelet[3904]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 17 19:01:01 embed-certs-527415 kubelet[3904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 17 19:01:01 embed-certs-527415 kubelet[3904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 17 19:01:09 embed-certs-527415 kubelet[3904]: E0717 19:01:09.878648    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	Jul 17 19:01:21 embed-certs-527415 kubelet[3904]: E0717 19:01:21.880288    3904 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-hvxtg" podUID="05a18f70-4284-4315-892e-2850ac8b5050"
	
	
	==> storage-provisioner [084cde7459c3484dc827fb95bcba3e12f9f645203aaf24df4adca837533190a1] <==
	I0717 18:46:17.767514       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:46:17.775947       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:46:17.776081       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:46:17.786748       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:46:17.786938       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-527415_4de59122-21e5-46d9-ba94-4da6fa5d9bed!
	I0717 18:46:17.796848       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a8197d73-dd9b-4f7a-a828-578a98fc0b06", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-527415_4de59122-21e5-46d9-ba94-4da6fa5d9bed became leader
	I0717 18:46:17.887494       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-527415_4de59122-21e5-46d9-ba94-4da6fa5d9bed!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-527415 -n embed-certs-527415
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-527415 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-hvxtg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-527415 describe pod metrics-server-569cc877fc-hvxtg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-527415 describe pod metrics-server-569cc877fc-hvxtg: exit status 1 (60.950687ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-hvxtg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-527415 describe pod metrics-server-569cc877fc-hvxtg: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (360.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (185.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:57:59.762783   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:58:21.395132   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 18:59:26.523226   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 19:00:12.089566   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
E0717 19:00:19.279903   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.128:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.128:8443: connect: connection refused
[warning above repeated 21 times in total]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019549 -n old-k8s-version-019549
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 2 (217.417702ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-019549" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-019549 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-019549 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.906µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-019549 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
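The repeated connection-refused warnings and the Stopped apiserver state above can be rechecked by hand with the same namespace and label selector the test polls; a minimal sketch, assuming the old-k8s-version-019549 profile from this run still exists on the test host:

	out/minikube-linux-amd64 status -p old-k8s-version-019549
	kubectl --context old-k8s-version-019549 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard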
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 2 (218.526385ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-019549 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-019549 logs -n 25: (1.574015497s)
E0717 19:00:41.791929   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl status crio                             |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo systemctl cat crio                                |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo find /etc/crio -type f                            |                              |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                          |                              |         |         |                     |                     |
	|         | \;                                                     |                              |         |         |                     |                     |
	| ssh     | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	|         | sudo crio config                                       |                              |         |         |                     |                     |
	| delete  | -p enable-default-cni-235476                           | enable-default-cni-235476    | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:31 UTC |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:31 UTC | 17 Jul 24 18:32 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-527415            | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p pause-371172                                        | pause-371172                 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	| delete  | -p                                                     | disable-driver-mounts-341716 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:32 UTC |
	|         | disable-driver-mounts-341716                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:32 UTC | 17 Jul 24 18:34 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-066175             | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC | 17 Jul 24 18:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-066175                                   | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-022930  | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC | 17 Jul 24 18:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-527415                 | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:34 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-019549        | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-527415                                  | embed-certs-527415           | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:46 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-066175                  | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-066175 --memory=2200                     | no-preload-066175            | jenkins | v1.33.1 | 17 Jul 24 18:35 UTC | 17 Jul 24 18:45 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-019549             | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC | 17 Jul 24 18:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-019549                              | old-k8s-version-019549       | jenkins | v1.33.1 | 17 Jul 24 18:36 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-022930       | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-022930 | jenkins | v1.33.1 | 17 Jul 24 18:37 UTC | 17 Jul 24 18:45 UTC |
	|         | default-k8s-diff-port-022930                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 18:37:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:37:14.473404   81068 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:37:14.473526   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473535   81068 out.go:304] Setting ErrFile to fd 2...
	I0717 18:37:14.473540   81068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:37:14.473714   81068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:37:14.474251   81068 out.go:298] Setting JSON to false
	I0717 18:37:14.475115   81068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8377,"bootTime":1721233057,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:37:14.475172   81068 start.go:139] virtualization: kvm guest
	I0717 18:37:14.477356   81068 out.go:177] * [default-k8s-diff-port-022930] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:37:14.478600   81068 notify.go:220] Checking for updates...
	I0717 18:37:14.478615   81068 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:37:14.480094   81068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:37:14.481516   81068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:37:14.482886   81068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:37:14.484159   81068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:37:14.485449   81068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:37:14.487164   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:37:14.487744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.487795   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.502368   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0717 18:37:14.502712   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.503192   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.503213   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.503574   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.503778   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.504032   81068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:37:14.504326   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:37:14.504381   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:37:14.518330   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0717 18:37:14.518718   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:37:14.519095   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:37:14.519114   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:37:14.519409   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:37:14.519578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:37:14.549923   81068 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 18:37:14.551160   81068 start.go:297] selected driver: kvm2
	I0717 18:37:14.551175   81068 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.551302   81068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:37:14.551931   81068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.552008   81068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 18:37:14.566038   81068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 18:37:14.566371   81068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:37:14.566443   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:37:14.566466   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:37:14.566516   81068 start.go:340] cluster config:
	{Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:37:14.566643   81068 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:37:14.568602   81068 out.go:177] * Starting "default-k8s-diff-port-022930" primary control-plane node in "default-k8s-diff-port-022930" cluster
	I0717 18:37:13.057187   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:16.129274   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:14.569868   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:37:14.569908   81068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 18:37:14.569919   81068 cache.go:56] Caching tarball of preloaded images
	I0717 18:37:14.569992   81068 preload.go:172] Found /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:37:14.570003   81068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on crio
	I0717 18:37:14.570100   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:37:14.570277   81068 start.go:360] acquireMachinesLock for default-k8s-diff-port-022930: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:37:22.209207   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:25.281226   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:31.361221   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:34.433258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:40.513234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:43.585225   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:49.665198   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:52.737256   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:37:58.817201   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:01.889213   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:07.969247   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:11.041264   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:17.121227   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:20.193250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:26.273206   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:29.345193   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:35.425259   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:38.497261   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:44.577185   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:47.649306   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:53.729234   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:38:56.801257   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:02.881239   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:05.953258   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:12.033251   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:15.105230   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:21.185200   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:24.257195   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:30.337181   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:33.409224   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:39.489219   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:42.561250   80180 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.90:22: connect: no route to host
	I0717 18:39:45.565739   80401 start.go:364] duration metric: took 4m11.345351864s to acquireMachinesLock for "no-preload-066175"
	I0717 18:39:45.565801   80401 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:39:45.565807   80401 fix.go:54] fixHost starting: 
	I0717 18:39:45.566167   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:39:45.566198   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:39:45.580996   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45665
	I0717 18:39:45.581389   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:39:45.581797   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:39:45.581817   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:39:45.582145   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:39:45.582323   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:39:45.582467   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:39:45.584074   80401 fix.go:112] recreateIfNeeded on no-preload-066175: state=Stopped err=<nil>
	I0717 18:39:45.584109   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	W0717 18:39:45.584260   80401 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:39:45.586842   80401 out.go:177] * Restarting existing kvm2 VM for "no-preload-066175" ...
	I0717 18:39:45.563046   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:39:45.563105   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563521   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:39:45.563555   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:39:45.563758   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:39:45.565594   80180 machine.go:97] duration metric: took 4m37.427146226s to provisionDockerMachine
	I0717 18:39:45.565643   80180 fix.go:56] duration metric: took 4m37.448013968s for fixHost
	I0717 18:39:45.565651   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 4m37.448033785s
	W0717 18:39:45.565675   80180 start.go:714] error starting host: provision: host is not running
	W0717 18:39:45.565775   80180 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0717 18:39:45.565784   80180 start.go:729] Will try again in 5 seconds ...
	I0717 18:39:45.587901   80401 main.go:141] libmachine: (no-preload-066175) Calling .Start
	I0717 18:39:45.588046   80401 main.go:141] libmachine: (no-preload-066175) Ensuring networks are active...
	I0717 18:39:45.588666   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network default is active
	I0717 18:39:45.589012   80401 main.go:141] libmachine: (no-preload-066175) Ensuring network mk-no-preload-066175 is active
	I0717 18:39:45.589386   80401 main.go:141] libmachine: (no-preload-066175) Getting domain xml...
	I0717 18:39:45.589959   80401 main.go:141] libmachine: (no-preload-066175) Creating domain...
	I0717 18:39:46.785717   80401 main.go:141] libmachine: (no-preload-066175) Waiting to get IP...
	I0717 18:39:46.786495   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:46.786912   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:46.786974   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:46.786888   81612 retry.go:31] will retry after 301.458026ms: waiting for machine to come up
	I0717 18:39:47.090556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.091129   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.091154   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.091098   81612 retry.go:31] will retry after 347.107185ms: waiting for machine to come up
	I0717 18:39:47.439530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.440010   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.440033   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.439947   81612 retry.go:31] will retry after 436.981893ms: waiting for machine to come up
	I0717 18:39:47.878684   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:47.879091   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:47.879120   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:47.879051   81612 retry.go:31] will retry after 582.942833ms: waiting for machine to come up
	I0717 18:39:48.464068   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:48.464568   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:48.464593   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:48.464513   81612 retry.go:31] will retry after 633.101908ms: waiting for machine to come up
	I0717 18:39:49.099383   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.099762   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.099784   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.099720   81612 retry.go:31] will retry after 847.181679ms: waiting for machine to come up
	I0717 18:39:50.567294   80180 start.go:360] acquireMachinesLock for embed-certs-527415: {Name:mk9777099aa5222699e7eb1a959d6393d4c7cab9 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0717 18:39:49.948696   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:49.949228   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:49.949260   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:49.949188   81612 retry.go:31] will retry after 1.048891217s: waiting for machine to come up
	I0717 18:39:50.999658   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.000062   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.000099   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.000001   81612 retry.go:31] will retry after 942.285454ms: waiting for machine to come up
	I0717 18:39:51.944171   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:51.944676   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:51.944702   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:51.944632   81612 retry.go:31] will retry after 1.21768861s: waiting for machine to come up
	I0717 18:39:53.163883   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:53.164345   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:53.164368   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:53.164305   81612 retry.go:31] will retry after 1.505905193s: waiting for machine to come up
	I0717 18:39:54.671532   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:54.671951   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:54.671977   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:54.671918   81612 retry.go:31] will retry after 2.885547597s: waiting for machine to come up
	I0717 18:39:57.560375   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:39:57.560878   80401 main.go:141] libmachine: (no-preload-066175) DBG | unable to find current IP address of domain no-preload-066175 in network mk-no-preload-066175
	I0717 18:39:57.560902   80401 main.go:141] libmachine: (no-preload-066175) DBG | I0717 18:39:57.560830   81612 retry.go:31] will retry after 3.53251124s: waiting for machine to come up
	I0717 18:40:02.249487   80857 start.go:364] duration metric: took 3m17.095542929s to acquireMachinesLock for "old-k8s-version-019549"
	I0717 18:40:02.249548   80857 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:02.249556   80857 fix.go:54] fixHost starting: 
	I0717 18:40:02.249946   80857 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:02.249976   80857 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:02.269365   80857 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45465
	I0717 18:40:02.269715   80857 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:02.270182   80857 main.go:141] libmachine: Using API Version  1
	I0717 18:40:02.270205   80857 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:02.270534   80857 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:02.270738   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:02.270875   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetState
	I0717 18:40:02.272408   80857 fix.go:112] recreateIfNeeded on old-k8s-version-019549: state=Stopped err=<nil>
	I0717 18:40:02.272443   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	W0717 18:40:02.272597   80857 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:02.274702   80857 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-019549" ...
	I0717 18:40:01.094975   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095556   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has current primary IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.095579   80401 main.go:141] libmachine: (no-preload-066175) Found IP for machine: 192.168.72.216
	I0717 18:40:01.095592   80401 main.go:141] libmachine: (no-preload-066175) Reserving static IP address...
	I0717 18:40:01.095955   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.095980   80401 main.go:141] libmachine: (no-preload-066175) DBG | skip adding static IP to network mk-no-preload-066175 - found existing host DHCP lease matching {name: "no-preload-066175", mac: "52:54:00:72:a5:17", ip: "192.168.72.216"}
	I0717 18:40:01.095989   80401 main.go:141] libmachine: (no-preload-066175) Reserved static IP address: 192.168.72.216
	I0717 18:40:01.096000   80401 main.go:141] libmachine: (no-preload-066175) Waiting for SSH to be available...
	I0717 18:40:01.096010   80401 main.go:141] libmachine: (no-preload-066175) DBG | Getting to WaitForSSH function...
	I0717 18:40:01.098163   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098498   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.098521   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.098631   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH client type: external
	I0717 18:40:01.098657   80401 main.go:141] libmachine: (no-preload-066175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa (-rw-------)
	I0717 18:40:01.098692   80401 main.go:141] libmachine: (no-preload-066175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:01.098707   80401 main.go:141] libmachine: (no-preload-066175) DBG | About to run SSH command:
	I0717 18:40:01.098720   80401 main.go:141] libmachine: (no-preload-066175) DBG | exit 0
	I0717 18:40:01.216740   80401 main.go:141] libmachine: (no-preload-066175) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:01.217099   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetConfigRaw
	I0717 18:40:01.217706   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.220160   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220461   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.220492   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.220656   80401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/config.json ...
	I0717 18:40:01.220843   80401 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:01.220860   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:01.221067   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.223044   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223347   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.223371   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.223531   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.223719   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223864   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.223980   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.224125   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.224332   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.224345   80401 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:01.321053   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:01.321083   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321333   80401 buildroot.go:166] provisioning hostname "no-preload-066175"
	I0717 18:40:01.321359   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.321529   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.323945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324269   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.324297   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.324421   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.324582   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324724   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.324837   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.324996   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.325162   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.325175   80401 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-066175 && echo "no-preload-066175" | sudo tee /etc/hostname
	I0717 18:40:01.435003   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-066175
	
	I0717 18:40:01.435033   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.437795   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438113   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.438155   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.438344   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.438533   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438692   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.438803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.438948   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.439094   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.439108   80401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-066175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-066175/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-066175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:01.540598   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:01.540631   80401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:01.540650   80401 buildroot.go:174] setting up certificates
	I0717 18:40:01.540660   80401 provision.go:84] configureAuth start
	I0717 18:40:01.540669   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetMachineName
	I0717 18:40:01.540977   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:01.543503   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543788   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.543817   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.543907   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.545954   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546261   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.546280   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.546415   80401 provision.go:143] copyHostCerts
	I0717 18:40:01.546483   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:01.546498   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:01.546596   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:01.546730   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:01.546743   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:01.546788   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:01.546878   80401 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:01.546888   80401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:01.546921   80401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:01.547054   80401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.no-preload-066175 san=[127.0.0.1 192.168.72.216 localhost minikube no-preload-066175]
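The provision step above generates a server certificate signed by the test CA, with the SANs listed in the log entry (127.0.0.1, 192.168.72.216, localhost, minikube, no-preload-066175). The Go sketch below is illustrative only and is not minikube's provision.go code: it uses a throwaway CA and hypothetical validity periods, where the real flow loads ca.pem / ca-key.pem from the .minikube directory.

// Illustrative sketch: a CA-signed server certificate with the SANs from the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow reuses the existing minikube CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-066175"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0), // hypothetical validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-066175"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.216")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Print the server certificate in PEM form (the real flow writes server.pem).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}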
	I0717 18:40:01.628522   80401 provision.go:177] copyRemoteCerts
	I0717 18:40:01.628574   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:01.628596   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.631306   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631714   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.631761   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.631876   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.632050   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.632210   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.632330   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:01.711344   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:01.738565   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 18:40:01.765888   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:40:01.790852   80401 provision.go:87] duration metric: took 250.181586ms to configureAuth
	I0717 18:40:01.790874   80401 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:01.791046   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:40:01.791111   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:01.793530   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.793922   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:01.793945   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:01.794095   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:01.794323   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794497   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:01.794635   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:01.794786   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:01.794955   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:01.794969   80401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:02.032506   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:02.032543   80401 machine.go:97] duration metric: took 811.687511ms to provisionDockerMachine
	I0717 18:40:02.032554   80401 start.go:293] postStartSetup for "no-preload-066175" (driver="kvm2")
	I0717 18:40:02.032567   80401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:02.032596   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.032921   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:02.032966   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.035429   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035731   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.035767   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.035921   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.036081   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.036351   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.036493   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.114601   80401 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:02.118230   80401 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:02.118247   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:02.118308   80401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:02.118384   80401 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:02.118592   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:02.126753   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:02.148028   80401 start.go:296] duration metric: took 115.461293ms for postStartSetup
	I0717 18:40:02.148066   80401 fix.go:56] duration metric: took 16.582258787s for fixHost
	I0717 18:40:02.148084   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.150550   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.150917   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.150949   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.151061   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.151242   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151394   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.151513   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.151658   80401 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:02.151828   80401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.72.216 22 <nil> <nil>}
	I0717 18:40:02.151841   80401 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:02.249303   80401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241602.223072082
	
	I0717 18:40:02.249334   80401 fix.go:216] guest clock: 1721241602.223072082
	I0717 18:40:02.249344   80401 fix.go:229] Guest: 2024-07-17 18:40:02.223072082 +0000 UTC Remote: 2024-07-17 18:40:02.14806999 +0000 UTC m=+268.060359078 (delta=75.002092ms)
	I0717 18:40:02.249388   80401 fix.go:200] guest clock delta is within tolerance: 75.002092ms
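The fix.go lines above read the guest clock over SSH (effectively "date +%s.%N", despite the %!s(MISSING) rendering in the logged command), compare it with the host clock, and accept the 75ms delta as within tolerance. Below is a minimal Go sketch of that comparison; the one-second tolerance is a hypothetical value, since the real threshold is not shown in the log.

// Sketch only: parse a "seconds.nanoseconds" guest clock reading and compare it to the host clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "1721241602.223072082" (seconds.nanoseconds) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		for len(frac) < 9 { // pad the fractional part to nanoseconds
			frac += "0"
		}
		nsec, err = strconv.ParseInt(frac[:9], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721241602.223072082") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Now().Sub(guest)
	tolerance := time.Second // hypothetical; the real tolerance lives in fix.go
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	}
}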
	I0717 18:40:02.249396   80401 start.go:83] releasing machines lock for "no-preload-066175", held for 16.683615057s
	I0717 18:40:02.249442   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.249735   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:02.252545   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.252896   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.252929   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.253053   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253516   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:40:02.253770   80401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:02.253803   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.253913   80401 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:02.253937   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:40:02.256152   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256462   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.256501   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256558   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.256616   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.256718   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.256879   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257013   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:02.257021   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.257038   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:02.257158   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:40:02.257312   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:40:02.257469   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:40:02.257604   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:40:02.376103   80401 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:02.381639   80401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:02.529357   80401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:02.536396   80401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:02.536463   80401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:02.555045   80401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:02.555067   80401 start.go:495] detecting cgroup driver to use...
	I0717 18:40:02.555130   80401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:02.570540   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:02.583804   80401 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:02.583867   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:02.596657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:02.610371   80401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:02.717489   80401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:02.875146   80401 docker.go:233] disabling docker service ...
	I0717 18:40:02.875235   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:02.895657   80401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:02.908366   80401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:03.018375   80401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:03.143922   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:03.160599   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:03.180643   80401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0717 18:40:03.180709   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.190040   80401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:03.190097   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.199275   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.208647   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.217750   80401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:03.226808   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.235779   80401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:03.251451   80401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
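The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that the pause image points at registry.k8s.io/pause:3.10, cgroup_manager is set to cgroupfs, and conmon_cgroup is forced to "pod". A minimal Go sketch of the same in-place rewrite follows; the starting config fragment is hypothetical, not CRI-O's shipped file, and this is not minikube's crio.go code.

// Sketch: apply the same substitutions as the sed commands in the log to a config fragment.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image and switch the cgroup driver, mirroring the logged sed edits.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then add it back as "pod" right after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}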
	I0717 18:40:03.261476   80401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:03.269978   80401 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:03.270028   80401 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:03.280901   80401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:03.290184   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:03.409167   80401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:03.541153   80401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:03.541218   80401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:03.546012   80401 start.go:563] Will wait 60s for crictl version
	I0717 18:40:03.546059   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:03.549567   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:03.588396   80401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:03.588467   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.622472   80401 ssh_runner.go:195] Run: crio --version
	I0717 18:40:03.652180   80401 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0717 18:40:03.653613   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetIP
	I0717 18:40:03.656560   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.656959   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:40:03.656987   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:40:03.657222   80401 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:03.661102   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:03.673078   80401 kubeadm.go:883] updating cluster {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:03.673212   80401 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 18:40:03.673248   80401 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:03.703959   80401 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0717 18:40:03.703986   80401 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:03.704042   80401 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.704078   80401 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.704095   80401 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.704114   80401 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.704150   80401 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.704077   80401 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.704168   80401 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0717 18:40:03.704243   80401 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:03.705795   80401 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.705801   80401 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.705787   80401 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.705792   80401 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.705816   80401 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.705829   80401 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0717 18:40:03.706094   80401 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.925413   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.930827   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0717 18:40:03.963901   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:03.964215   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:03.966162   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:03.970852   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:03.973664   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:03.997849   80401 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0717 18:40:03.997912   80401 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:03.997969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118851   80401 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0717 18:40:04.118888   80401 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.118892   80401 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0717 18:40:04.118924   80401 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.118934   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118943   80401 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0717 18:40:04.118969   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.118969   80401 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.119001   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119027   80401 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0717 18:40:04.119058   80401 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.119089   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0717 18:40:04.119104   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:04.119065   80401 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0717 18:40:04.119136   80401 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.119159   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:02.275985   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .Start
	I0717 18:40:02.276143   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring networks are active...
	I0717 18:40:02.276898   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network default is active
	I0717 18:40:02.277333   80857 main.go:141] libmachine: (old-k8s-version-019549) Ensuring network mk-old-k8s-version-019549 is active
	I0717 18:40:02.277796   80857 main.go:141] libmachine: (old-k8s-version-019549) Getting domain xml...
	I0717 18:40:02.278481   80857 main.go:141] libmachine: (old-k8s-version-019549) Creating domain...
	I0717 18:40:03.571325   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting to get IP...
	I0717 18:40:03.572359   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.572836   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.572968   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.572816   81751 retry.go:31] will retry after 301.991284ms: waiting for machine to come up
	I0717 18:40:03.876263   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:03.876688   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:03.876715   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:03.876637   81751 retry.go:31] will retry after 286.461163ms: waiting for machine to come up
	I0717 18:40:04.165366   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.165873   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.165902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.165811   81751 retry.go:31] will retry after 383.479108ms: waiting for machine to come up
	I0717 18:40:04.551152   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.551615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.551650   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.551589   81751 retry.go:31] will retry after 429.076714ms: waiting for machine to come up
	I0717 18:40:04.982157   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:04.982517   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:04.982545   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:04.982470   81751 retry.go:31] will retry after 553.684035ms: waiting for machine to come up
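The libmachine lines above poll for the restarted VM's DHCP lease and retry with growing delays ("will retry after ...: waiting for machine to come up"). Below is a sketch of that wait-with-backoff pattern; the lookupIP helper is a hypothetical stand-in for the real libvirt lease query, and the delays and timeout are illustrative.

// Sketch: retry a lease lookup with increasing, jittered delays until an IP appears or the deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the DHCP leases of the libvirt network.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Back off with a little jitter, similar to the varying delays in the log.
		delay := time.Duration(attempt)*200*time.Millisecond +
			time.Duration(rand.Intn(300))*time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if ip, err := waitForIP("old-k8s-version-019549", 5*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}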
	I0717 18:40:04.122952   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0717 18:40:04.130590   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0717 18:40:04.130741   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0717 18:40:04.200609   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0717 18:40:04.200631   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0717 18:40:04.200643   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0717 18:40:04.200728   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:04.200741   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.200815   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.212034   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0717 18:40:04.212057   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.212113   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:04.212123   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:04.259447   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259525   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259548   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259552   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:04.259553   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0717 18:40:04.259534   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.259588   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0717 18:40:04.259591   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0717 18:40:04.259628   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0717 18:40:04.259639   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:04.550060   80401 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236639   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.976976668s)
	I0717 18:40:06.236683   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236691   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.97711629s)
	I0717 18:40:06.236718   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0717 18:40:06.236732   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.977125153s)
	I0717 18:40:06.236752   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0717 18:40:06.236776   80401 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236854   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0717 18:40:06.236781   80401 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.68669473s)
	I0717 18:40:06.236908   80401 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 18:40:06.236951   80401 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:06.236994   80401 ssh_runner.go:195] Run: which crictl
	I0717 18:40:08.107122   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.870244887s)
	I0717 18:40:08.107152   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0717 18:40:08.107175   80401 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107203   80401 ssh_runner.go:235] Completed: which crictl: (1.870188554s)
	I0717 18:40:08.107224   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0717 18:40:08.107261   80401 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:08.146817   80401 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 18:40:08.146932   80401 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:05.538229   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:05.538753   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:05.538777   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:05.538702   81751 retry.go:31] will retry after 747.130907ms: waiting for machine to come up
	I0717 18:40:06.287146   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:06.287626   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:06.287665   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:06.287581   81751 retry.go:31] will retry after 1.171580264s: waiting for machine to come up
	I0717 18:40:07.461393   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:07.462015   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:07.462046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:07.461963   81751 retry.go:31] will retry after 1.199265198s: waiting for machine to come up
	I0717 18:40:08.663340   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:08.663789   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:08.663815   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:08.663745   81751 retry.go:31] will retry after 1.621895351s: waiting for machine to come up
	I0717 18:40:11.404193   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.296944718s)
	I0717 18:40:11.404228   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0717 18:40:11.404248   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:11.404245   80401 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.257289666s)
	I0717 18:40:11.404272   80401 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0717 18:40:11.404294   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0717 18:40:13.370389   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.966067238s)
	I0717 18:40:13.370426   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0717 18:40:13.370455   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:13.370505   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0717 18:40:10.287596   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:10.288019   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:10.288046   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:10.287964   81751 retry.go:31] will retry after 1.748504204s: waiting for machine to come up
	I0717 18:40:12.038137   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:12.038582   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:12.038615   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:12.038532   81751 retry.go:31] will retry after 2.477996004s: waiting for machine to come up
	I0717 18:40:14.517788   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:14.518175   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | unable to find current IP address of domain old-k8s-version-019549 in network mk-old-k8s-version-019549
	I0717 18:40:14.518203   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | I0717 18:40:14.518123   81751 retry.go:31] will retry after 3.29313184s: waiting for machine to come up
	I0717 18:40:19.093608   81068 start.go:364] duration metric: took 3m4.523289209s to acquireMachinesLock for "default-k8s-diff-port-022930"
	I0717 18:40:19.093694   81068 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:19.093705   81068 fix.go:54] fixHost starting: 
	I0717 18:40:19.094122   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:19.094157   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:19.113793   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0717 18:40:19.114236   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:19.114755   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:40:19.114775   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:19.115110   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:19.115294   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:19.115434   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:40:19.117072   81068 fix.go:112] recreateIfNeeded on default-k8s-diff-port-022930: state=Stopped err=<nil>
	I0717 18:40:19.117109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	W0717 18:40:19.117256   81068 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:19.120986   81068 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-022930" ...
	I0717 18:40:15.214734   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.844202729s)
	I0717 18:40:15.214756   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0717 18:40:15.214777   80401 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:15.214814   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0717 18:40:17.066570   80401 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.851726063s)
	I0717 18:40:17.066604   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0717 18:40:17.066629   80401 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.066679   80401 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 18:40:17.703556   80401 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 18:40:17.703614   80401 cache_images.go:123] Successfully loaded all cached images
	I0717 18:40:17.703624   80401 cache_images.go:92] duration metric: took 13.999623105s to LoadCachedImages
	I0717 18:40:17.703638   80401 kubeadm.go:934] updating node { 192.168.72.216 8443 v1.31.0-beta.0 crio true true} ...
	I0717 18:40:17.703754   80401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-066175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:17.703830   80401 ssh_runner.go:195] Run: crio config
	I0717 18:40:17.753110   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:17.753138   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:17.753159   80401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:17.753190   80401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.216 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-066175 NodeName:no-preload-066175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:17.753404   80401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-066175"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:17.753492   80401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0717 18:40:17.763417   80401 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:17.763491   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:17.772139   80401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0717 18:40:17.786982   80401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0717 18:40:17.801327   80401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0717 18:40:17.816796   80401 ssh_runner.go:195] Run: grep 192.168.72.216	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:17.820354   80401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
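Note: the one-liner above updates /etc/hosts idempotently: it drops any stale line for the control-plane name and then appends the current mapping. A minimal, equivalent sketch using the IP and hostname from this run:

    IP=192.168.72.216
    HOST=control-plane.minikube.internal
    # keep every line except an existing entry for $HOST, then append the fresh mapping
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$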
	I0717 18:40:17.834155   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:17.970222   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:17.989953   80401 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175 for IP: 192.168.72.216
	I0717 18:40:17.989977   80401 certs.go:194] generating shared ca certs ...
	I0717 18:40:17.989998   80401 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:17.990160   80401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:17.990217   80401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:17.990231   80401 certs.go:256] generating profile certs ...
	I0717 18:40:17.990365   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/client.key
	I0717 18:40:17.990460   80401 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key.78182672
	I0717 18:40:17.990509   80401 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key
	I0717 18:40:17.990679   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:17.990723   80401 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:17.990740   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:17.990772   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:17.990813   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:17.990846   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:17.990905   80401 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:17.991590   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:18.035349   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:18.079539   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:18.110382   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:18.135920   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 18:40:18.168675   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:18.196132   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:18.230418   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/no-preload-066175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:18.254319   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:18.277293   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:18.301416   80401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:18.330021   80401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:18.348803   80401 ssh_runner.go:195] Run: openssl version
	I0717 18:40:18.355126   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:18.366004   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370221   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.370287   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:18.375799   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:18.385991   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:18.396141   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400451   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.400526   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:18.406203   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:18.419059   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:18.429450   80401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433742   80401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.433794   80401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:18.439261   80401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
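Note: the hash-named symlinks above follow OpenSSL's subject-hash convention: the link name is the certificate's subject hash plus a ".0" suffix, which is how OpenSSL locates a CA in a hashed certificate directory. A minimal sketch with an illustrative certificate path:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as seen in the log above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # name OpenSSL expects when scanning /etc/ssl/certs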
	I0717 18:40:18.450327   80401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:18.454734   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:18.460256   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:18.465766   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:18.471349   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:18.476780   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:18.482509   80401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
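Note: openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next 86400 seconds (24 hours), which is what the checks above rely on. A short sketch looping over the same certificate files:

    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/apiserver-etcd-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/etcd/healthcheck-client.crt \
               /var/lib/minikube/certs/etcd/peer.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 || echo "expiring within 24h: $crt"
    done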
	I0717 18:40:18.488138   80401 kubeadm.go:392] StartCluster: {Name:no-preload-066175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-066175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:18.488229   80401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:18.488270   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.532219   80401 cri.go:89] found id: ""
	I0717 18:40:18.532318   80401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:18.542632   80401 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:18.542655   80401 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:18.542699   80401 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:18.552352   80401 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:18.553351   80401 kubeconfig.go:125] found "no-preload-066175" server: "https://192.168.72.216:8443"
	I0717 18:40:18.555295   80401 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:18.565857   80401 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.216
	I0717 18:40:18.565892   80401 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:18.565905   80401 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:18.565958   80401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:18.605512   80401 cri.go:89] found id: ""
	I0717 18:40:18.605593   80401 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:18.622235   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:18.633175   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:18.633196   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:18.633241   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:18.641969   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:18.642023   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:18.651017   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:18.659619   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:18.659667   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:18.668008   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.675985   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:18.676037   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:18.685937   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:18.695574   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:18.695624   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:18.706040   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:18.717397   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:18.836009   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
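Note: the two kubeadm init phase runs above regenerate the certificates and the /etc/kubernetes/*.conf kubeconfigs that were reported missing earlier. A quick hedged check that the regenerated kubeconfigs point at the expected endpoint (endpoint taken from this run):

    for cfg in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$cfg" \
        && echo "$cfg: endpoint ok" || echo "$cfg: endpoint missing"
    done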
	I0717 18:40:19.122366   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Start
	I0717 18:40:19.122530   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring networks are active...
	I0717 18:40:19.123330   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network default is active
	I0717 18:40:19.123832   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Ensuring network mk-default-k8s-diff-port-022930 is active
	I0717 18:40:19.124268   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Getting domain xml...
	I0717 18:40:19.124922   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Creating domain...
	I0717 18:40:17.813673   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814213   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has current primary IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.814242   80857 main.go:141] libmachine: (old-k8s-version-019549) Found IP for machine: 192.168.39.128
	I0717 18:40:17.814277   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserving static IP address...
	I0717 18:40:17.814720   80857 main.go:141] libmachine: (old-k8s-version-019549) Reserved static IP address: 192.168.39.128
	I0717 18:40:17.814738   80857 main.go:141] libmachine: (old-k8s-version-019549) Waiting for SSH to be available...
	I0717 18:40:17.814762   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.814783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | skip adding static IP to network mk-old-k8s-version-019549 - found existing host DHCP lease matching {name: "old-k8s-version-019549", mac: "52:54:00:60:f7:87", ip: "192.168.39.128"}
	I0717 18:40:17.814796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Getting to WaitForSSH function...
	I0717 18:40:17.817314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817714   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.817743   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.817917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH client type: external
	I0717 18:40:17.817944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa (-rw-------)
	I0717 18:40:17.817971   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:17.817984   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | About to run SSH command:
	I0717 18:40:17.818000   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | exit 0
	I0717 18:40:17.945902   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:17.946262   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetConfigRaw
	I0717 18:40:17.946907   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:17.949757   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950158   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.950178   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.950474   80857 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/config.json ...
	I0717 18:40:17.950706   80857 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:17.950728   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:17.950941   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:17.953738   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954141   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:17.954184   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:17.954282   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:17.954456   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954617   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:17.954790   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:17.954957   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:17.955121   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:17.955131   80857 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:18.061082   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:18.061113   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061405   80857 buildroot.go:166] provisioning hostname "old-k8s-version-019549"
	I0717 18:40:18.061432   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.061685   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.064855   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065314   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.065348   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.065537   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.065777   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.065929   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.066118   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.066329   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.066547   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.066564   80857 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-019549 && echo "old-k8s-version-019549" | sudo tee /etc/hostname
	I0717 18:40:18.191467   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-019549
	
	I0717 18:40:18.191517   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.194917   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195455   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.195502   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.195714   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.195908   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196105   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.196288   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.196483   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.196708   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.196731   80857 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-019549' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-019549/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-019549' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:18.315020   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:18.315047   80857 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:18.315065   80857 buildroot.go:174] setting up certificates
	I0717 18:40:18.315078   80857 provision.go:84] configureAuth start
	I0717 18:40:18.315090   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetMachineName
	I0717 18:40:18.315358   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:18.318342   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.318796   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.318826   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.319078   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.321562   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.321914   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.321944   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.322125   80857 provision.go:143] copyHostCerts
	I0717 18:40:18.322208   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:18.322226   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:18.322309   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:18.322443   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:18.322457   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:18.322492   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:18.322579   80857 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:18.322591   80857 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:18.322621   80857 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:18.322727   80857 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-019549 san=[127.0.0.1 192.168.39.128 localhost minikube old-k8s-version-019549]
	I0717 18:40:18.397216   80857 provision.go:177] copyRemoteCerts
	I0717 18:40:18.397266   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:18.397301   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.399887   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400237   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.400286   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.400531   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.400732   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.400880   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.401017   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.490677   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:18.518392   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 18:40:18.543930   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:18.567339   80857 provision.go:87] duration metric: took 252.250106ms to configureAuth
	I0717 18:40:18.567360   80857 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:18.567539   80857 config.go:182] Loaded profile config "old-k8s-version-019549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0717 18:40:18.567610   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.570373   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570783   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.570809   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.570943   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.571140   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571281   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.571451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.571624   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.571841   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.571862   80857 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:18.845725   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:18.845752   80857 machine.go:97] duration metric: took 895.03234ms to provisionDockerMachine
	I0717 18:40:18.845765   80857 start.go:293] postStartSetup for "old-k8s-version-019549" (driver="kvm2")
	I0717 18:40:18.845778   80857 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:18.845828   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:18.846158   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:18.846192   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.848760   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849264   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.849293   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.849451   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.849649   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.849843   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.850007   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:18.938026   80857 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:18.943223   80857 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:18.943254   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:18.943317   80857 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:18.943417   80857 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:18.943509   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:18.954887   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:18.976980   80857 start.go:296] duration metric: took 131.200877ms for postStartSetup
	I0717 18:40:18.977022   80857 fix.go:56] duration metric: took 16.727466541s for fixHost
	I0717 18:40:18.977041   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:18.980020   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980384   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:18.980417   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:18.980533   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:18.980723   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.980903   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:18.981059   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:18.981207   80857 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:18.981406   80857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.39.128 22 <nil> <nil>}
	I0717 18:40:18.981418   80857 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:40:19.093409   80857 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241619.063415252
	
	I0717 18:40:19.093433   80857 fix.go:216] guest clock: 1721241619.063415252
	I0717 18:40:19.093443   80857 fix.go:229] Guest: 2024-07-17 18:40:19.063415252 +0000 UTC Remote: 2024-07-17 18:40:18.97702579 +0000 UTC m=+213.960604949 (delta=86.389462ms)
	I0717 18:40:19.093494   80857 fix.go:200] guest clock delta is within tolerance: 86.389462ms
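Note: the guest-clock comparison above runs date +%s.%N inside the VM over SSH and subtracts the host time captured when the command returns, so the reported delta includes some SSH latency. A rough sketch of reproducing it by hand (address and key path are the ones used in this run; the extra SSH options are omitted):

    GUEST=$(ssh -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa \
                docker@192.168.39.128 'date +%s.%N')
    HOST=$(date +%s.%N)
    echo "guest-host delta: $(echo "$GUEST - $HOST" | bc) s"   # positive means the guest clock is ahead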
	I0717 18:40:19.093506   80857 start.go:83] releasing machines lock for "old-k8s-version-019549", held for 16.843984035s
	I0717 18:40:19.093543   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.093842   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:19.096443   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.096817   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.096848   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.097035   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097579   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097769   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .DriverName
	I0717 18:40:19.097859   80857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:19.097915   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.098007   80857 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:19.098031   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHHostname
	I0717 18:40:19.100775   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101108   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101160   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101185   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101412   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101595   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.101606   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:19.101637   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:19.101718   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.101789   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHPort
	I0717 18:40:19.101853   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.101975   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHKeyPath
	I0717 18:40:19.102092   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetSSHUsername
	I0717 18:40:19.102212   80857 sshutil.go:53] new ssh client: &{IP:192.168.39.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/old-k8s-version-019549/id_rsa Username:docker}
	I0717 18:40:19.218596   80857 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:19.225675   80857 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:19.371453   80857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:19.381365   80857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:19.381438   80857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:19.397504   80857 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:19.397530   80857 start.go:495] detecting cgroup driver to use...
	I0717 18:40:19.397597   80857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:19.412150   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:19.425495   80857 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:19.425578   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:19.438662   80857 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:19.451953   80857 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:19.578702   80857 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:19.733328   80857 docker.go:233] disabling docker service ...
	I0717 18:40:19.733411   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:19.753615   80857 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:19.774057   80857 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:19.933901   80857 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:20.049914   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:20.063500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:20.082560   80857 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 18:40:20.082611   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.092857   80857 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:20.092912   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.103283   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:20.112612   80857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
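	The four sed invocations above are the whole cri-o preparation step: pin the pause image and force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon is restarted. A minimal Go sketch of the same sequence (not minikube's actual helper; the command strings are taken from the log lines above) could look like:

	// crio_config_sketch.go - replay the cri-o config edits shown in the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func configureCRIO(pauseImage string) error {
		cmds := []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		}
		for _, c := range cmds {
			if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
				return fmt.Errorf("%q failed: %v: %s", c, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := configureCRIO("registry.k8s.io/pause:3.2"); err != nil {
			fmt.Println(err)
		}
	}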
	I0717 18:40:20.122671   80857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:20.132892   80857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:20.145445   80857 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:20.145501   80857 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:20.158958   80857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
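	The status-255 sysctl above is expected on a fresh guest: /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, so the probe falls back to modprobe and then enables IPv4 forwarding. A sketch of that flow (assumed from the log ordering, not minikube's exact code):

	// netfilter_sketch.go - probe the bridge sysctl, load br_netfilter if missing,
	// then make sure ip_forward is on, mirroring the three commands in the log.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func enableBridgeNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// sysctl exits non-zero when the bridge keys are absent; load the module.
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := enableBridgeNetfilter(); err != nil {
			fmt.Println(err)
		}
	}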
	I0717 18:40:20.168377   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:20.307224   80857 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:20.453407   80857 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:20.453490   80857 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:20.458007   80857 start.go:563] Will wait 60s for crictl version
	I0717 18:40:20.458062   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:20.461420   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:20.507358   80857 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:20.507426   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.542812   80857 ssh_runner.go:195] Run: crio --version
	I0717 18:40:20.577280   80857 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0717 18:40:20.432028   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.59597321s)
	I0717 18:40:20.432063   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.633854   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.728474   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:20.879989   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:20.880079   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.380421   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.880208   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:21.912390   80401 api_server.go:72] duration metric: took 1.032400417s to wait for apiserver process to appear ...
	I0717 18:40:21.912419   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:21.912443   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:21.912904   80401 api_server.go:269] stopped: https://192.168.72.216:8443/healthz: Get "https://192.168.72.216:8443/healthz": dial tcp 192.168.72.216:8443: connect: connection refused
	I0717 18:40:22.412598   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:20.397025   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting to get IP...
	I0717 18:40:20.398122   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398525   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.398610   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.398506   81910 retry.go:31] will retry after 285.646022ms: waiting for machine to come up
	I0717 18:40:20.686556   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687151   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.687263   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.687202   81910 retry.go:31] will retry after 239.996ms: waiting for machine to come up
	I0717 18:40:20.928604   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929111   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:20.929139   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:20.929057   81910 retry.go:31] will retry after 487.674422ms: waiting for machine to come up
	I0717 18:40:21.418475   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418928   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.418952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.418872   81910 retry.go:31] will retry after 439.363216ms: waiting for machine to come up
	I0717 18:40:21.859546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:21.860273   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:21.860145   81910 retry.go:31] will retry after 598.922134ms: waiting for machine to come up
	I0717 18:40:22.461026   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461509   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:22.461542   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:22.461457   81910 retry.go:31] will retry after 908.602286ms: waiting for machine to come up
	I0717 18:40:23.371582   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:23.372170   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:23.372093   81910 retry.go:31] will retry after 893.690966ms: waiting for machine to come up
	I0717 18:40:24.267377   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267908   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:24.267935   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:24.267873   81910 retry.go:31] will retry after 1.468061022s: waiting for machine to come up
	I0717 18:40:20.578679   80857 main.go:141] libmachine: (old-k8s-version-019549) Calling .GetIP
	I0717 18:40:20.581569   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.581933   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:f7:87", ip: ""} in network mk-old-k8s-version-019549: {Iface:virbr1 ExpiryTime:2024-07-17 19:40:12 +0000 UTC Type:0 Mac:52:54:00:60:f7:87 Iaid: IPaddr:192.168.39.128 Prefix:24 Hostname:old-k8s-version-019549 Clientid:01:52:54:00:60:f7:87}
	I0717 18:40:20.581961   80857 main.go:141] libmachine: (old-k8s-version-019549) DBG | domain old-k8s-version-019549 has defined IP address 192.168.39.128 and MAC address 52:54:00:60:f7:87 in network mk-old-k8s-version-019549
	I0717 18:40:20.582197   80857 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:20.586047   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
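	The grep/rewrite pair above only touches /etc/hosts when the host.minikube.internal mapping is missing. A simplified, local-only sketch of the same idea (the real flow rewrites through a temp file and sudo cp, as the log shows):

	// hosts_sketch.go - ensure an /etc/hosts entry exists; append it if absent.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				return nil // mapping already present
			}
		}
		f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = f.WriteString(ip + "\t" + host + "\n")
		return err
	}

	func main() {
		if err := ensureHostEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}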
	I0717 18:40:20.598137   80857 kubeadm.go:883] updating cluster {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:20.598284   80857 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 18:40:20.598355   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:20.646681   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:20.646757   80857 ssh_runner.go:195] Run: which lz4
	I0717 18:40:20.650691   80857 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:20.654703   80857 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:20.654730   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0717 18:40:22.163706   80857 crio.go:462] duration metric: took 1.513040695s to copy over tarball
	I0717 18:40:22.163783   80857 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
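	Because the stat check found no /preloaded.tar.lz4 on the guest, the tarball is copied over ssh and then unpacked under /var with xattrs preserved so image capabilities survive. A sketch of the guest-side half (paths and tar flags taken from the log; the ssh copy itself is omitted):

	// preload_sketch.go - verify the preload tarball is present, then extract it.
	package main

	import (
		"fmt"
		"os/exec"
	)

	const preload = "/preloaded.tar.lz4"

	func extractPreload() error {
		if err := exec.Command("stat", preload).Run(); err != nil {
			return fmt.Errorf("%s missing on the guest; copy it over first: %w", preload, err)
		}
		// Same extraction command the log shows, preserving security.capability xattrs.
		return exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", preload).Run()
	}

	func main() {
		if err := extractPreload(); err != nil {
			fmt.Println(err)
		}
	}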
	I0717 18:40:24.904256   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.904292   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.904308   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:24.971088   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:24.971120   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:24.971136   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.015832   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.015868   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.413309   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.418927   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.418955   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:25.913026   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:25.917375   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:25.917407   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.412566   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.419115   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.419140   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:26.912680   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:26.920245   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:26.920268   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.412854   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.417356   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.417390   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:27.912883   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:27.918242   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:27.918274   80401 api_server.go:103] status: https://192.168.72.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:28.412591   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:40:28.419257   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:40:28.427814   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:40:28.427842   80401 api_server.go:131] duration metric: took 6.515416451s to wait for apiserver health ...
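	The long run of 403 and 500 responses above is the normal startup sequence: the apiserver answers before its poststart hooks (RBAC bootstrap roles, priority classes, apiservice registration) have finished, so the poller keeps retrying until /healthz returns 200 "ok". A minimal sketch of such a poll loop (endpoint taken from the log; the interval and TLS handling are assumptions, not minikube's exact code):

	// healthz_sketch.go - poll the apiserver /healthz endpoint until it reports ok.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				io.Copy(io.Discard, resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
				// 403/500 while poststart hooks finish are expected; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.216:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}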
	I0717 18:40:28.427854   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:40:28.427863   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:28.429828   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:40:28.431012   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:28.444822   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
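	The 496-byte conflist scp'd above is not reproduced in the log; the sketch below writes a generic bridge-plus-portmap CNI config of the same shape to the same path (the field values are illustrative, not necessarily what minikube ships):

	// cni_conflist_sketch.go - drop a minimal bridge CNI config into /etc/cni/net.d.
	package main

	import (
		"fmt"
		"os"
	)

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
			fmt.Println(err)
			return
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
			fmt.Println(err)
		}
	}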
	I0717 18:40:28.465212   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:28.477639   80401 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:28.477691   80401 system_pods.go:61] "coredns-5cfdc65f69-spj2w" [6849b651-9346-4d96-97a7-88eca7bbd50a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:28.477706   80401 system_pods.go:61] "etcd-no-preload-066175" [be012488-220b-421d-bf16-a3623fafb8fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:28.477721   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [4292a786-61f3-405d-8784-ec8a58e1b124] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:28.477731   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [937a48f4-7fca-4cee-bb50-51f1720960da] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:28.477739   80401 system_pods.go:61] "kube-proxy-tn5xn" [f0a910b3-98b6-470f-a5a2-e49369ecb733] Running
	I0717 18:40:28.477748   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [ffa2475c-7a5a-4988-89a2-4727e07356cb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:28.477756   80401 system_pods.go:61] "metrics-server-78fcd8795b-mbtvd" [ccd7a565-52ef-49be-b659-31ae20af537a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:28.477761   80401 system_pods.go:61] "storage-provisioner" [19914ecc-2fcc-4cb8-bd78-fb6891dcf85d] Running
	I0717 18:40:28.477769   80401 system_pods.go:74] duration metric: took 12.536267ms to wait for pod list to return data ...
	I0717 18:40:28.477777   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:28.482322   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:28.482348   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:28.482368   80401 node_conditions.go:105] duration metric: took 4.585233ms to run NodePressure ...
	I0717 18:40:28.482387   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.768656   80401 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773308   80401 kubeadm.go:739] kubelet initialised
	I0717 18:40:28.773330   80401 kubeadm.go:740] duration metric: took 4.654448ms waiting for restarted kubelet to initialise ...
	I0717 18:40:28.773338   80401 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:28.778778   80401 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:25.738071   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738580   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:25.738611   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:25.738538   81910 retry.go:31] will retry after 1.505740804s: waiting for machine to come up
	I0717 18:40:27.246293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246651   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:27.246674   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:27.246606   81910 retry.go:31] will retry after 1.574253799s: waiting for machine to come up
	I0717 18:40:28.822159   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822546   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:28.822597   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:28.822517   81910 retry.go:31] will retry after 2.132842884s: waiting for machine to come up
	I0717 18:40:25.307875   80857 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.144060111s)
	I0717 18:40:25.307903   80857 crio.go:469] duration metric: took 3.144169984s to extract the tarball
	I0717 18:40:25.307914   80857 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:25.354436   80857 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:25.404799   80857 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0717 18:40:25.404827   80857 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:40:25.404884   80857 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.404936   80857 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 18:40:25.404908   80857 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.404910   80857 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.404952   80857 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.404998   80857 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.405010   80857 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.406657   80857 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.406661   80857 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.406667   80857 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.406660   80857 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 18:40:25.406690   80857 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.407119   80857 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:25.619950   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 18:40:25.635075   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.641561   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.647362   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.648054   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.649684   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.664183   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.709163   80857 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 18:40:25.709227   80857 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 18:40:25.709275   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.760931   80857 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 18:40:25.760994   80857 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.761042   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.779324   80857 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 18:40:25.779378   80857 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.779429   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799052   80857 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 18:40:25.799097   80857 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.799106   80857 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 18:40:25.799131   80857 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 18:40:25.799190   80857 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.799233   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799136   80857 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.799148   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.799298   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.806973   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 18:40:25.807041   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 18:40:25.807066   80857 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 18:40:25.807095   80857 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.807126   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0717 18:40:25.807137   80857 ssh_runner.go:195] Run: which crictl
	I0717 18:40:25.807237   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0717 18:40:25.811025   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 18:40:25.811114   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 18:40:25.935792   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0717 18:40:25.935853   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0717 18:40:25.935863   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 18:40:25.935934   80857 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 18:40:25.935973   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0717 18:40:25.935996   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0717 18:40:25.940351   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0717 18:40:25.970107   80857 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0717 18:40:26.231894   80857 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:40:26.372230   80857 cache_images.go:92] duration metric: took 967.383323ms to LoadCachedImages
	W0717 18:40:26.372327   80857 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19283-14386/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0717 18:40:26.372346   80857 kubeadm.go:934] updating node { 192.168.39.128 8443 v1.20.0 crio true true} ...
	I0717 18:40:26.372517   80857 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-019549 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:26.372613   80857 ssh_runner.go:195] Run: crio config
	I0717 18:40:26.416155   80857 cni.go:84] Creating CNI manager for ""
	I0717 18:40:26.416181   80857 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:26.416196   80857 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:26.416229   80857 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.128 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-019549 NodeName:old-k8s-version-019549 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 18:40:26.416526   80857 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.128
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-019549"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.128
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.128"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:26.416595   80857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 18:40:26.426941   80857 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:26.427006   80857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:26.437810   80857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 18:40:26.460046   80857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:26.482521   80857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 18:40:26.502536   80857 ssh_runner.go:195] Run: grep 192.168.39.128	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:26.506513   80857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:26.520895   80857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:26.648931   80857 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:26.665278   80857 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549 for IP: 192.168.39.128
	I0717 18:40:26.665300   80857 certs.go:194] generating shared ca certs ...
	I0717 18:40:26.665329   80857 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:26.665508   80857 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:26.665561   80857 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:26.665574   80857 certs.go:256] generating profile certs ...
	I0717 18:40:26.665693   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/client.key
	I0717 18:40:26.665780   80857 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key.9c9b0a7e
	I0717 18:40:26.665836   80857 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key
	I0717 18:40:26.665998   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:26.666049   80857 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:26.666063   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:26.666095   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:26.666128   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:26.666167   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:26.666225   80857 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:26.667047   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:26.713984   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:26.742617   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:26.770441   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:26.795098   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 18:40:26.825038   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:26.861300   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:26.901664   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/old-k8s-version-019549/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:40:26.926357   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:26.948986   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:26.973248   80857 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:26.994642   80857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:27.010158   80857 ssh_runner.go:195] Run: openssl version
	I0717 18:40:27.015861   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:27.026221   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030496   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.030567   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:27.035862   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:27.046312   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:27.057117   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061775   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.061824   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:27.067535   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:27.079022   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:27.090009   80857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094688   80857 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.094768   80857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:27.100404   80857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:27.110653   80857 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:27.115117   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:27.120633   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:27.126070   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:27.131500   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:27.137035   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:27.142426   80857 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 18:40:27.147638   80857 kubeadm.go:392] StartCluster: {Name:old-k8s-version-019549 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-019549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.128 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:27.147756   80857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:27.147816   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.187433   80857 cri.go:89] found id: ""
	I0717 18:40:27.187498   80857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:27.197001   80857 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:27.197020   80857 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:27.197070   80857 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:27.206758   80857 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:27.207822   80857 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-019549" does not appear in /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:40:27.208505   80857 kubeconfig.go:62] /home/jenkins/minikube-integration/19283-14386/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-019549" cluster setting kubeconfig missing "old-k8s-version-019549" context setting]
	I0717 18:40:27.209497   80857 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:27.212786   80857 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:27.222612   80857 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.128
	I0717 18:40:27.222649   80857 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:27.222663   80857 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:27.222721   80857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:27.268127   80857 cri.go:89] found id: ""
	I0717 18:40:27.268205   80857 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:27.284334   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:27.293669   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:27.293691   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:27.293743   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:40:27.305348   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:27.305437   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:27.317749   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:40:27.328481   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:27.328547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:27.337574   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.346242   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:27.346299   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:27.354946   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:40:27.363296   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:27.363350   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:27.371925   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:40:27.384020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:27.571539   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:28.767574   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.19599736s)
	I0717 18:40:28.767612   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.011512   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.151980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:29.258796   80857 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:29.258886   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:29.759072   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.787614   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:33.285208   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:30.956634   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957109   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:30.957140   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:30.957059   81910 retry.go:31] will retry after 3.31337478s: waiting for machine to come up
	I0717 18:40:34.272528   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273063   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | unable to find current IP address of domain default-k8s-diff-port-022930 in network mk-default-k8s-diff-port-022930
	I0717 18:40:34.273094   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | I0717 18:40:34.273032   81910 retry.go:31] will retry after 3.207729964s: waiting for machine to come up
	I0717 18:40:30.259921   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:30.758948   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.258967   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:31.759872   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.259187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:32.759299   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.259080   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:33.759583   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.259740   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:34.759068   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.697183   80180 start.go:364] duration metric: took 48.129837953s to acquireMachinesLock for "embed-certs-527415"
	I0717 18:40:38.697248   80180 start.go:96] Skipping create...Using existing machine configuration
	I0717 18:40:38.697260   80180 fix.go:54] fixHost starting: 
	I0717 18:40:38.697680   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:40:38.697712   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:40:38.713575   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36711
	I0717 18:40:38.713926   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:40:38.714396   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:40:38.714422   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:40:38.714762   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:40:38.714949   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:38.715109   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:40:38.716552   80180 fix.go:112] recreateIfNeeded on embed-certs-527415: state=Stopped err=<nil>
	I0717 18:40:38.716574   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	W0717 18:40:38.716775   80180 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 18:40:38.718610   80180 out.go:177] * Restarting existing kvm2 VM for "embed-certs-527415" ...
	I0717 18:40:35.285888   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:36.285651   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.285676   80401 pod_ready.go:81] duration metric: took 7.506876819s for pod "coredns-5cfdc65f69-spj2w" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.285686   80401 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292615   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:36.292638   80401 pod_ready.go:81] duration metric: took 6.944487ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:36.292650   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:38.298338   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:37.484312   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484723   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has current primary IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.484740   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Found IP for machine: 192.168.50.245
	I0717 18:40:37.484753   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserving static IP address...
	I0717 18:40:37.485137   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.485161   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Reserved static IP address: 192.168.50.245
	I0717 18:40:37.485174   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | skip adding static IP to network mk-default-k8s-diff-port-022930 - found existing host DHCP lease matching {name: "default-k8s-diff-port-022930", mac: "52:54:00:5d:76:ae", ip: "192.168.50.245"}
	I0717 18:40:37.485191   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Getting to WaitForSSH function...
	I0717 18:40:37.485207   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Waiting for SSH to be available...
	I0717 18:40:37.487397   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487767   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.487796   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.487899   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH client type: external
	I0717 18:40:37.487927   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa (-rw-------)
	I0717 18:40:37.487961   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:37.487973   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | About to run SSH command:
	I0717 18:40:37.487992   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | exit 0
	I0717 18:40:37.608746   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:37.609085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetConfigRaw
	I0717 18:40:37.609739   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.612293   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612668   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.612689   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.612936   81068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/config.json ...
	I0717 18:40:37.613176   81068 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:37.613194   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:37.613391   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.615483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615774   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.615804   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.615881   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.616038   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616187   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.616306   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.616470   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.616676   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.616691   81068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:37.720971   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:37.721004   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721307   81068 buildroot.go:166] provisioning hostname "default-k8s-diff-port-022930"
	I0717 18:40:37.721340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.721654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.724162   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724507   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.724535   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.724712   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.724912   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725090   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.725259   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.725430   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.725635   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.725651   81068 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-022930 && echo "default-k8s-diff-port-022930" | sudo tee /etc/hostname
	I0717 18:40:37.837366   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-022930
	
	I0717 18:40:37.837389   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.839920   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840291   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.840325   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.840450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:37.840654   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840830   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:37.840970   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:37.841130   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:37.841344   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:37.841363   81068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-022930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-022930/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-022930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:37.948311   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:37.948343   81068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:37.948394   81068 buildroot.go:174] setting up certificates
	I0717 18:40:37.948406   81068 provision.go:84] configureAuth start
	I0717 18:40:37.948416   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetMachineName
	I0717 18:40:37.948732   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:37.951214   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951548   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.951578   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.951693   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:37.953805   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954086   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:37.954105   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:37.954250   81068 provision.go:143] copyHostCerts
	I0717 18:40:37.954318   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:37.954334   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:37.954401   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:37.954531   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:37.954542   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:37.954575   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:37.954657   81068 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:37.954667   81068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:37.954694   81068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:37.954758   81068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-022930 san=[127.0.0.1 192.168.50.245 default-k8s-diff-port-022930 localhost minikube]
	I0717 18:40:38.054084   81068 provision.go:177] copyRemoteCerts
	I0717 18:40:38.054136   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:38.054160   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.056841   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057265   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.057300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.057483   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.057683   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.057839   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.057982   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.138206   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:38.163105   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 18:40:38.188449   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:38.214829   81068 provision.go:87] duration metric: took 266.409028ms to configureAuth
	I0717 18:40:38.214853   81068 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:38.215005   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:38.215068   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.217684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218010   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.218037   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.218247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.218419   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218573   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.218706   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.218874   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.219021   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.219039   81068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:38.471162   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:38.471191   81068 machine.go:97] duration metric: took 858.000457ms to provisionDockerMachine
	I0717 18:40:38.471206   81068 start.go:293] postStartSetup for "default-k8s-diff-port-022930" (driver="kvm2")
	I0717 18:40:38.471220   81068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:38.471247   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.471558   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:38.471590   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.474241   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474673   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.474704   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.474868   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.475085   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.475245   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.475524   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.554800   81068 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:38.558601   81068 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:38.558624   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:38.558685   81068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:38.558769   81068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:38.558875   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:38.567664   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:38.589713   81068 start.go:296] duration metric: took 118.491854ms for postStartSetup
	I0717 18:40:38.589754   81068 fix.go:56] duration metric: took 19.496049651s for fixHost
	I0717 18:40:38.589777   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.592433   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592813   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.592860   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.592989   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.593188   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593368   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.593536   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.593738   81068 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:38.593937   81068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.50.245 22 <nil> <nil>}
	I0717 18:40:38.593955   81068 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0717 18:40:38.697050   81068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241638.669121206
	
	I0717 18:40:38.697075   81068 fix.go:216] guest clock: 1721241638.669121206
	I0717 18:40:38.697085   81068 fix.go:229] Guest: 2024-07-17 18:40:38.669121206 +0000 UTC Remote: 2024-07-17 18:40:38.589759024 +0000 UTC m=+204.149894792 (delta=79.362182ms)
	I0717 18:40:38.697108   81068 fix.go:200] guest clock delta is within tolerance: 79.362182ms
	I0717 18:40:38.697118   81068 start.go:83] releasing machines lock for "default-k8s-diff-port-022930", held for 19.603450588s
	I0717 18:40:38.697143   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.697381   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:38.700059   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700504   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.700529   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.700764   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701541   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:40:38.701619   81068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:38.701672   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.701777   81068 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:38.701797   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:40:38.704169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704478   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.704503   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704657   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.704684   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.704849   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705002   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705164   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.705262   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:38.705300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:38.705496   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:40:38.705663   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:40:38.705817   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:40:38.705967   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:40:38.825607   81068 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:38.831484   81068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:38.972775   81068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:38.978446   81068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:38.978502   81068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:38.999160   81068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:38.999180   81068 start.go:495] detecting cgroup driver to use...
	I0717 18:40:38.999234   81068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:39.016133   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:39.029031   81068 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:39.029083   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:39.042835   81068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:39.056981   81068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:39.168521   81068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:39.306630   81068 docker.go:233] disabling docker service ...
	I0717 18:40:39.306704   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:39.320435   81068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:39.337780   81068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:35.259643   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:35.759432   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.259818   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:36.759627   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.259968   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:37.758933   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.259980   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:38.759776   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.259988   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:39.496847   81068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:39.627783   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:39.641684   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:39.659183   81068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:39.659250   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.669034   81068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:39.669100   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.678708   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.688822   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.699484   81068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:39.709505   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.720715   81068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.736510   81068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:39.746991   81068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:39.757265   81068 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:39.757320   81068 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:39.774777   81068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
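The failed `sysctl net.bridge.bridge-nf-call-iptables` above only means the br_netfilter module was not loaded yet; after `modprobe br_netfilter` the /proc entry exists, and IP forwarding is turned on by writing 1 to /proc/sys/net/ipv4/ip_forward. A small read-only Go sketch of the same two checks (it does not load modules or change settings):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // If br_netfilter is not loaded, this proc entry does not exist, which is
        // exactly the "cannot stat" error seen in the log above.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            fmt.Println("br_netfilter not loaded:", err)
        }

        // ip_forward must read "1" for pod traffic to be routed between interfaces.
        b, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
        if err != nil {
            fmt.Println("cannot read ip_forward:", err)
            return
        }
        fmt.Println("net.ipv4.ip_forward =", strings.TrimSpace(string(b)))
    }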
	I0717 18:40:39.789593   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:39.907377   81068 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:40.039498   81068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:40.039592   81068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:40.044502   81068 start.go:563] Will wait 60s for crictl version
	I0717 18:40:40.044558   81068 ssh_runner.go:195] Run: which crictl
	I0717 18:40:40.048708   81068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:40.087738   81068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:40.087822   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.115460   81068 ssh_runner.go:195] Run: crio --version
	I0717 18:40:40.150181   81068 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
	I0717 18:40:38.719828   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Start
	I0717 18:40:38.720004   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring networks are active...
	I0717 18:40:38.720983   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network default is active
	I0717 18:40:38.721537   80180 main.go:141] libmachine: (embed-certs-527415) Ensuring network mk-embed-certs-527415 is active
	I0717 18:40:38.721945   80180 main.go:141] libmachine: (embed-certs-527415) Getting domain xml...
	I0717 18:40:38.722654   80180 main.go:141] libmachine: (embed-certs-527415) Creating domain...
	I0717 18:40:40.007036   80180 main.go:141] libmachine: (embed-certs-527415) Waiting to get IP...
	I0717 18:40:40.007975   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.008511   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.008608   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.008495   82069 retry.go:31] will retry after 268.334211ms: waiting for machine to come up
	I0717 18:40:40.278129   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.278639   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.278670   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.278585   82069 retry.go:31] will retry after 350.00147ms: waiting for machine to come up
	I0717 18:40:40.630229   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:40.630819   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:40.630853   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:40.630768   82069 retry.go:31] will retry after 411.079615ms: waiting for machine to come up
	I0717 18:40:41.043232   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.043851   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.043880   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.043822   82069 retry.go:31] will retry after 387.726284ms: waiting for machine to come up
	I0717 18:40:41.433536   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.434058   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.434092   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.434005   82069 retry.go:31] will retry after 538.564385ms: waiting for machine to come up
	I0717 18:40:41.973917   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:41.974457   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:41.974489   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:41.974395   82069 retry.go:31] will retry after 778.576616ms: waiting for machine to come up
	I0717 18:40:42.754322   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:42.754872   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:42.754899   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:42.754837   82069 retry.go:31] will retry after 758.957234ms: waiting for machine to come up
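The retry.go waits above (268ms, 350ms, 411ms, growing toward a couple of seconds) are the usual shape of a jittered, growing backoff while polling libvirt for the machine's DHCP lease. A generic sketch of such a loop follows; the backoff constants and helper name are illustrative, not minikube's retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds or attempts run out, sleeping a growing,
    // jittered delay between tries.
    func retry(attempts int, base time.Duration, fn func() error) error {
        delay := base
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // grow roughly 1.5x per attempt
        }
        return errors.New("gave up waiting")
    }

    func main() {
        tries := 0
        err := retry(10, 250*time.Millisecond, func() error {
            tries++
            if tries < 4 {
                return errors.New("machine has no IP yet") // stand-in for the lease lookup
            }
            return nil
        })
        fmt.Println("result:", err, "after", tries, "tries")
    }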
	I0717 18:40:40.299673   80401 pod_ready.go:102] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:40.801297   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.801325   80401 pod_ready.go:81] duration metric: took 4.508666316s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.801339   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807354   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.807372   80401 pod_ready.go:81] duration metric: took 6.024916ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.807380   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812934   80401 pod_ready.go:92] pod "kube-proxy-tn5xn" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.812982   80401 pod_ready.go:81] duration metric: took 5.594378ms for pod "kube-proxy-tn5xn" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.812996   80401 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817940   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:40:40.817969   80401 pod_ready.go:81] duration metric: took 4.96427ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:40.817982   80401 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:42.825018   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
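The pod_ready.go lines above poll each kube-system pod until its PodReady condition turns True; metrics-server stays "False" here, presumably because the test profile points its image at the fake.domain registry (see the CustomAddonRegistries entry in the cluster config below). A minimal client-go sketch of the same readiness check, with the kubeconfig path and pod name as placeholders:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Placeholder pod name; the log above checks pods like kube-proxy-tn5xn.
        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-tn5xn", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Printf("pod %s Ready=%s\n", pod.Name, c.Status)
            }
        }
    }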
	I0717 18:40:40.151220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetIP
	I0717 18:40:40.153791   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:40:40.154246   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:40:40.154472   81068 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:40.159310   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
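The /etc/hosts update above is a grep-out-then-append pattern: drop any existing host.minikube.internal line, then write the gateway mapping back. A Go sketch of the same idempotent edit, pointed at a placeholder file so it does not touch the real /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const path = "./hosts.example" // placeholder; the real target is /etc/hosts
        const entry = "192.168.50.1\thost.minikube.internal"

        data, _ := os.ReadFile(path) // a missing/empty file is fine for this sketch
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Drop blank lines and any stale mapping for the same hostname.
            if line == "" || strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)

        if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        fmt.Println("wrote", path)
    }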
	I0717 18:40:40.172121   81068 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:40.172256   81068 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:40.172307   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:40.215863   81068 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:40.215940   81068 ssh_runner.go:195] Run: which lz4
	I0717 18:40:40.220502   81068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:40.224682   81068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:40.224714   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:41.511505   81068 crio.go:462] duration metric: took 1.291039238s to copy over tarball
	I0717 18:40:41.511574   81068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:40:43.730839   81068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.219230444s)
	I0717 18:40:43.730901   81068 crio.go:469] duration metric: took 2.219370372s to extract the tarball
	I0717 18:40:43.730912   81068 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:40:43.767876   81068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:43.809466   81068 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:40:43.809494   81068 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:40:43.809505   81068 kubeadm.go:934] updating node { 192.168.50.245 8444 v1.30.2 crio true true} ...
	I0717 18:40:43.809646   81068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-022930 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:40:43.809740   81068 ssh_runner.go:195] Run: crio config
	I0717 18:40:43.850614   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:43.850635   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:43.850648   81068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:40:43.850669   81068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.245 APIServerPort:8444 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-022930 NodeName:default-k8s-diff-port-022930 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:40:43.850795   81068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.245
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-022930"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:40:43.850851   81068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:40:43.862674   81068 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:40:43.862733   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:40:43.873304   81068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0717 18:40:43.888884   81068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:40:43.903631   81068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0717 18:40:43.918768   81068 ssh_runner.go:195] Run: grep 192.168.50.245	control-plane.minikube.internal$ /etc/hosts
	I0717 18:40:43.922033   81068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:43.932546   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:44.049621   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:40:44.065718   81068 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930 for IP: 192.168.50.245
	I0717 18:40:44.065747   81068 certs.go:194] generating shared ca certs ...
	I0717 18:40:44.065767   81068 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:40:44.065939   81068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:40:44.065999   81068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:40:44.066016   81068 certs.go:256] generating profile certs ...
	I0717 18:40:44.066149   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/client.key
	I0717 18:40:44.066224   81068 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key.8aa7f0a0
	I0717 18:40:44.066284   81068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key
	I0717 18:40:44.066445   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:40:44.066494   81068 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:40:44.066507   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:40:44.066548   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:40:44.066579   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:40:44.066606   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:40:44.066650   81068 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:44.067421   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:40:44.104160   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:40:44.133716   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:40:44.161170   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:40:44.190489   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0717 18:40:44.211792   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:40:44.232875   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:40:44.255059   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/default-k8s-diff-port-022930/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:40:44.276826   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:40:44.298357   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:40:44.320634   81068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:40:44.345428   81068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:40:44.362934   81068 ssh_runner.go:195] Run: openssl version
	I0717 18:40:44.369764   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:40:44.382557   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386445   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.386483   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:40:44.392033   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:40:44.401987   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:40:44.411437   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415367   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.415419   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:40:44.420523   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:40:44.429915   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:40:44.439371   81068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443248   81068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.443301   81068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:40:44.448380   81068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:40:44.457828   81068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:40:44.462151   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:40:44.467474   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:40:44.472829   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:40:40.259910   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:40.759917   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.259718   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:41.759839   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.259129   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:42.759772   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.259989   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.759724   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.258978   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:44.759594   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:43.515097   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:43.515595   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:43.515616   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:43.515539   82069 retry.go:31] will retry after 1.173590835s: waiting for machine to come up
	I0717 18:40:44.691027   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:44.691479   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:44.691520   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:44.691428   82069 retry.go:31] will retry after 1.594704966s: waiting for machine to come up
	I0717 18:40:46.288022   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:46.288609   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:46.288642   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:46.288549   82069 retry.go:31] will retry after 2.014912325s: waiting for machine to come up
	I0717 18:40:45.323815   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:47.324715   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:44.478397   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:40:44.483860   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:40:44.489029   81068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
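Each `openssl x509 -noout -checkend 86400` call above exits non-zero if the certificate would expire within the next 24 hours, which is how the existing control-plane certs are judged reusable. The equivalent check in Go with crypto/x509, using a placeholder path instead of the files under /var/lib/minikube/certs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        const path = "apiserver.crt" // placeholder certificate file

        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }

        // Equivalent of `openssl x509 -checkend 86400`: fail if the cert's
        // NotAfter falls within the next 86400 seconds.
        if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid past 24h, expires", cert.NotAfter)
    }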
	I0717 18:40:44.494220   81068 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-022930 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.2 ClusterName:default-k8s-diff-port-022930 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:40:44.494329   81068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:40:44.494381   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.534380   81068 cri.go:89] found id: ""
	I0717 18:40:44.534445   81068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:40:44.545270   81068 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:40:44.545287   81068 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:40:44.545328   81068 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:40:44.555521   81068 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:40:44.556584   81068 kubeconfig.go:125] found "default-k8s-diff-port-022930" server: "https://192.168.50.245:8444"
	I0717 18:40:44.558675   81068 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:40:44.567696   81068 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.245
	I0717 18:40:44.567727   81068 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:40:44.567739   81068 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:40:44.567787   81068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:40:44.605757   81068 cri.go:89] found id: ""
	I0717 18:40:44.605833   81068 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:40:44.622187   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:40:44.631169   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:40:44.631191   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:40:44.631241   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:40:44.639194   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:40:44.639248   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:40:44.647542   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:40:44.655622   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:40:44.655708   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:40:44.663923   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.671733   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:40:44.671778   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:40:44.680375   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:40:44.688043   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:40:44.688085   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:40:44.697020   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
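The cleanup above greps each /etc/kubernetes/*.conf for the expected https://control-plane.minikube.internal:8444 server and removes any file that does not reference it (here the files simply do not exist yet, hence the status-2 exits). A sketch of that check in Go, with relative placeholder paths instead of /etc/kubernetes:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const wantServer = "https://control-plane.minikube.internal:8444"
        // Placeholders for /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf.
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}

        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                fmt.Printf("%s: missing (%v), nothing to clean\n", f, err)
                continue
            }
            if !strings.Contains(string(data), wantServer) {
                // Stale config pointing at a different endpoint: remove it so the
                // kubeadm init phases that follow can regenerate it.
                fmt.Printf("%s: does not reference %s, removing\n", f, wantServer)
                _ = os.Remove(f)
            }
        }
    }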
	I0717 18:40:44.705554   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:44.812051   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.351683   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.559471   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.618086   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:45.678836   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:40:45.678926   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.179998   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.679083   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.179084   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.679042   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.179150   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.195192   81068 api_server.go:72] duration metric: took 2.516354411s to wait for apiserver process to appear ...
	I0717 18:40:48.195222   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:40:48.195247   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:45.259185   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:45.759765   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.259009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:46.759131   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.259477   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:47.759386   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.259977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:48.759374   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.259744   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:49.759440   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.393650   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.393688   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.393705   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.467974   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:40:50.468000   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:40:50.696340   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:50.702264   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:50.702308   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.195503   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.200034   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:40:51.200060   81068 api_server.go:103] status: https://192.168.50.245:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:40:51.695594   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:40:51.699593   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:40:51.706025   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:40:51.706048   81068 api_server.go:131] duration metric: took 3.510818337s to wait for apiserver health ...
	I0717 18:40:51.706059   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:40:51.706067   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:40:51.707696   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
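	[editor's note] The 500s above come from the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks not having finished yet; minikube simply keeps polling https://192.168.50.245:8444/healthz roughly every 500ms until it answers 200 (reached at 18:40:51.699 after ~3.5s). A minimal sketch of that kind of health wait, with a hypothetical helper name rather than minikube's actual api_server.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an HTTPS /healthz endpoint until it returns 200 OK
	// or the timeout elapses. TLS verification is skipped because the apiserver
	// serves a self-signed certificate during bootstrap.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthy, as in the log above
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.245:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}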
	I0717 18:40:48.305798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:48.306290   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:48.306323   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:48.306232   82069 retry.go:31] will retry after 1.789943402s: waiting for machine to come up
	I0717 18:40:50.098279   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:50.098771   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:50.098798   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:50.098734   82069 retry.go:31] will retry after 2.765766483s: waiting for machine to come up
	I0717 18:40:52.867667   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:52.868191   80180 main.go:141] libmachine: (embed-certs-527415) DBG | unable to find current IP address of domain embed-certs-527415 in network mk-embed-certs-527415
	I0717 18:40:52.868212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | I0717 18:40:52.868139   82069 retry.go:31] will retry after 2.762670644s: waiting for machine to come up
	I0717 18:40:49.325415   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.824015   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:53.824980   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:51.708887   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:40:51.718704   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:40:51.735711   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:40:51.745976   81068 system_pods.go:59] 8 kube-system pods found
	I0717 18:40:51.746009   81068 system_pods.go:61] "coredns-7db6d8ff4d-czk4x" [80cedf0b-248a-458e-994c-81f852d78076] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:40:51.746022   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f9cf97bf-5fdc-4623-a78c-d29e0352ce40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:40:51.746036   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [599cef4d-2b4d-4cd5-9552-99de585759eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:40:51.746051   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [89092470-6fc9-47b2-b680-7c93945d9005] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:40:51.746062   81068 system_pods.go:61] "kube-proxy-hj7ss" [d260f18e-7a01-4f07-8c6a-87e8f6329f79] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 18:40:51.746074   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [fe098478-fcb6-4084-b773-11c2cbb995aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:40:51.746083   81068 system_pods.go:61] "metrics-server-569cc877fc-j9qhx" [18efb008-e7d3-435e-9156-57c16b454d07] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:40:51.746093   81068 system_pods.go:61] "storage-provisioner" [ac856758-62ca-485f-aa31-5cd1c7d1dbe5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:40:51.746103   81068 system_pods.go:74] duration metric: took 10.373616ms to wait for pod list to return data ...
	I0717 18:40:51.746115   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:40:51.749151   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:40:51.749173   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:40:51.749185   81068 node_conditions.go:105] duration metric: took 3.061813ms to run NodePressure ...
	I0717 18:40:51.749204   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:40:52.049486   81068 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053636   81068 kubeadm.go:739] kubelet initialised
	I0717 18:40:52.053656   81068 kubeadm.go:740] duration metric: took 4.136528ms waiting for restarted kubelet to initialise ...
	I0717 18:40:52.053665   81068 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:40:52.058401   81068 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.062406   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062429   81068 pod_ready.go:81] duration metric: took 4.007504ms for pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.062439   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "coredns-7db6d8ff4d-czk4x" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.062454   81068 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.066161   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066185   81068 pod_ready.go:81] duration metric: took 3.717781ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.066202   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.066212   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:52.070043   81068 pod_ready.go:97] node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070064   81068 pod_ready.go:81] duration metric: took 3.840533ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	E0717 18:40:52.070074   81068 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-022930" hosting pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-022930" has status "Ready":"False"
	I0717 18:40:52.070080   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:40:54.077110   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
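	[editor's note] The pod_ready.go lines above wait for each system-critical pod's Ready condition and skip pods whose node is itself not yet Ready. A rough client-go equivalent of that wait, shown here as a hypothetical standalone program (the kubeconfig path and pod name are taken from the log, everything else is illustrative):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical kubeconfig path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			if ctx.Err() != nil {
				fmt.Println("timed out waiting for pod to become Ready")
				return
			}
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-controller-manager-default-k8s-diff-port-022930", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}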
	I0717 18:40:50.258977   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:50.758964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.259867   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:51.759826   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.259016   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:52.759708   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.259589   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:53.759788   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.259753   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:54.759841   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.633531   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.633999   80180 main.go:141] libmachine: (embed-certs-527415) Found IP for machine: 192.168.61.90
	I0717 18:40:55.634014   80180 main.go:141] libmachine: (embed-certs-527415) Reserving static IP address...
	I0717 18:40:55.634026   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has current primary IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.634407   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.634438   80180 main.go:141] libmachine: (embed-certs-527415) Reserved static IP address: 192.168.61.90
	I0717 18:40:55.634456   80180 main.go:141] libmachine: (embed-certs-527415) DBG | skip adding static IP to network mk-embed-certs-527415 - found existing host DHCP lease matching {name: "embed-certs-527415", mac: "52:54:00:4e:52:9a", ip: "192.168.61.90"}
	I0717 18:40:55.634476   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Getting to WaitForSSH function...
	I0717 18:40:55.634490   80180 main.go:141] libmachine: (embed-certs-527415) Waiting for SSH to be available...
	I0717 18:40:55.636604   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.636877   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.636904   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.637010   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH client type: external
	I0717 18:40:55.637032   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Using SSH private key: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa (-rw-------)
	I0717 18:40:55.637063   80180 main.go:141] libmachine: (embed-certs-527415) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.90 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0717 18:40:55.637082   80180 main.go:141] libmachine: (embed-certs-527415) DBG | About to run SSH command:
	I0717 18:40:55.637094   80180 main.go:141] libmachine: (embed-certs-527415) DBG | exit 0
	I0717 18:40:55.765208   80180 main.go:141] libmachine: (embed-certs-527415) DBG | SSH cmd err, output: <nil>: 
	I0717 18:40:55.765554   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetConfigRaw
	I0717 18:40:55.766322   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:55.769331   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.769800   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.769827   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.770203   80180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/config.json ...
	I0717 18:40:55.770593   80180 machine.go:94] provisionDockerMachine start ...
	I0717 18:40:55.770620   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:55.770826   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.773837   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774313   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.774346   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.774553   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.774750   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.774909   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.775060   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.775277   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.775534   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.775556   80180 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 18:40:55.888982   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0717 18:40:55.889013   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889259   80180 buildroot.go:166] provisioning hostname "embed-certs-527415"
	I0717 18:40:55.889286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:55.889501   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:55.891900   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892284   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:55.892302   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:55.892532   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:55.892701   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892853   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:55.892993   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:55.893136   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:55.893293   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:55.893310   80180 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-527415 && echo "embed-certs-527415" | sudo tee /etc/hostname
	I0717 18:40:56.018869   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-527415
	
	I0717 18:40:56.018898   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.021591   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.021888   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.021909   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.022286   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.022489   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022646   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.022765   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.022905   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.023050   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.023066   80180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-527415' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-527415/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-527415' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:40:56.146411   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:40:56.146455   80180 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19283-14386/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-14386/.minikube}
	I0717 18:40:56.146478   80180 buildroot.go:174] setting up certificates
	I0717 18:40:56.146490   80180 provision.go:84] configureAuth start
	I0717 18:40:56.146502   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetMachineName
	I0717 18:40:56.146767   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.149369   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149725   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.149755   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.149937   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.152431   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152753   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.152774   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.152936   80180 provision.go:143] copyHostCerts
	I0717 18:40:56.153028   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem, removing ...
	I0717 18:40:56.153041   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem
	I0717 18:40:56.153096   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/ca.pem (1082 bytes)
	I0717 18:40:56.153186   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem, removing ...
	I0717 18:40:56.153194   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem
	I0717 18:40:56.153214   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/cert.pem (1123 bytes)
	I0717 18:40:56.153277   80180 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem, removing ...
	I0717 18:40:56.153283   80180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem
	I0717 18:40:56.153300   80180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-14386/.minikube/key.pem (1679 bytes)
	I0717 18:40:56.153349   80180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem org=jenkins.embed-certs-527415 san=[127.0.0.1 192.168.61.90 embed-certs-527415 localhost minikube]
	I0717 18:40:56.326978   80180 provision.go:177] copyRemoteCerts
	I0717 18:40:56.327024   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:40:56.327045   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.329432   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329778   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.329809   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.329927   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.330121   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.330295   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.330409   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.415173   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0717 18:40:56.438501   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 18:40:56.460520   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:40:56.481808   80180 provision.go:87] duration metric: took 335.305142ms to configureAuth
	I0717 18:40:56.481832   80180 buildroot.go:189] setting minikube options for container-runtime
	I0717 18:40:56.482001   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:40:56.482063   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.484653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485044   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.485074   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.485222   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.485468   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485652   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.485810   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.485953   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.486108   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.486123   80180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:40:56.741135   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:40:56.741185   80180 machine.go:97] duration metric: took 970.573336ms to provisionDockerMachine
	I0717 18:40:56.741204   80180 start.go:293] postStartSetup for "embed-certs-527415" (driver="kvm2")
	I0717 18:40:56.741221   80180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:40:56.741245   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.741597   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:40:56.741625   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.744356   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.744805   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.744831   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.745025   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.745224   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.745382   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.745549   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.835435   80180 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:40:56.839724   80180 info.go:137] Remote host: Buildroot 2023.02.9
	I0717 18:40:56.839753   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/addons for local assets ...
	I0717 18:40:56.839834   80180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-14386/.minikube/files for local assets ...
	I0717 18:40:56.839945   80180 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem -> 215772.pem in /etc/ssl/certs
	I0717 18:40:56.840083   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:40:56.849582   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:40:56.872278   80180 start.go:296] duration metric: took 131.057656ms for postStartSetup
	I0717 18:40:56.872347   80180 fix.go:56] duration metric: took 18.175085798s for fixHost
	I0717 18:40:56.872375   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.874969   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875308   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.875340   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.875533   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.875722   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.875955   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.876089   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.876274   80180 main.go:141] libmachine: Using SSH client type: native
	I0717 18:40:56.876459   80180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da20] 0x830780 <nil>  [] 0s} 192.168.61.90 22 <nil> <nil>}
	I0717 18:40:56.876469   80180 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0717 18:40:56.985888   80180 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721241656.959508652
	
	I0717 18:40:56.985907   80180 fix.go:216] guest clock: 1721241656.959508652
	I0717 18:40:56.985914   80180 fix.go:229] Guest: 2024-07-17 18:40:56.959508652 +0000 UTC Remote: 2024-07-17 18:40:56.872354453 +0000 UTC m=+348.896679896 (delta=87.154199ms)
	I0717 18:40:56.985939   80180 fix.go:200] guest clock delta is within tolerance: 87.154199ms
	I0717 18:40:56.985944   80180 start.go:83] releasing machines lock for "embed-certs-527415", held for 18.288718042s
	I0717 18:40:56.985964   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.986210   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:56.988716   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989086   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.989114   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.989279   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989786   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.989966   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:40:56.990055   80180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:40:56.990092   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.990360   80180 ssh_runner.go:195] Run: cat /version.json
	I0717 18:40:56.990390   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:40:56.992519   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992816   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.992835   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992852   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.992984   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993162   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993212   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:56.993234   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:56.993356   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993401   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:40:56.993499   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:56.993541   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:40:56.993754   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:40:56.993915   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:40:57.116598   80180 ssh_runner.go:195] Run: systemctl --version
	I0717 18:40:57.122546   80180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:40:57.268379   80180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 18:40:57.274748   80180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 18:40:57.274819   80180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:40:57.290374   80180 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 18:40:57.290394   80180 start.go:495] detecting cgroup driver to use...
	I0717 18:40:57.290443   80180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:40:57.307521   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:40:57.323478   80180 docker.go:217] disabling cri-docker service (if available) ...
	I0717 18:40:57.323554   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:40:57.337078   80180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:40:57.350181   80180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:40:57.463512   80180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:40:57.626650   80180 docker.go:233] disabling docker service ...
	I0717 18:40:57.626714   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:40:57.641067   80180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:40:57.655085   80180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:40:57.802789   80180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:40:57.919140   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:40:57.932620   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:40:57.949471   80180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:40:57.949528   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.960297   80180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:40:57.960366   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.970890   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.980768   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:57.990723   80180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:40:58.000791   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.010332   80180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.026611   80180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:40:58.036106   80180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:40:58.044742   80180 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0717 18:40:58.044791   80180 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0717 18:40:58.056584   80180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:40:58.065470   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:40:58.182119   80180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:40:58.319330   80180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:40:58.319400   80180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:40:58.326361   80180 start.go:563] Will wait 60s for crictl version
	I0717 18:40:58.326405   80180 ssh_runner.go:195] Run: which crictl
	I0717 18:40:58.329951   80180 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:40:58.366561   80180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0717 18:40:58.366668   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.398483   80180 ssh_runner.go:195] Run: crio --version
	I0717 18:40:58.427421   80180 out.go:177] * Preparing Kubernetes v1.30.2 on CRI-O 1.29.1 ...
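	[editor's note] The sequence above is minikube reconfiguring CRI-O over SSH: it points crictl at /var/run/crio/crio.sock, pins the pause image, switches the cgroup manager to cgroupfs, and restarts the service before waiting for the socket. A condensed sketch of the same sed-based edits; the helper below is hypothetical, the authoritative commands are the ssh_runner lines in the log:

	package main

	import "fmt"

	// crioSedCommands returns shell commands that pin the pause image and the
	// cgroup manager in CRI-O's drop-in config, mirroring the edits in the log.
	func crioSedCommands(pauseImage, cgroupManager string) []string {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
	}

	func main() {
		for _, cmd := range crioSedCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
			fmt.Println(cmd) // in the log these run over SSH on the guest VM
		}
	}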
	I0717 18:40:56.324834   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.325283   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:56.077315   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:58.077815   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:40:55.259450   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:55.759932   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.259395   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:56.759855   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.259739   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:57.759436   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.258951   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.759931   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.259588   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:59.759651   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:40:58.428872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetIP
	I0717 18:40:58.431182   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431554   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:40:58.431580   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:40:58.431756   80180 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0717 18:40:58.435914   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:40:58.448777   80180 kubeadm.go:883] updating cluster {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 18:40:58.448923   80180 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 18:40:58.449018   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:40:58.488011   80180 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.2". assuming images are not preloaded.
	I0717 18:40:58.488077   80180 ssh_runner.go:195] Run: which lz4
	I0717 18:40:58.491828   80180 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:40:58.495609   80180 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:40:58.495640   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (395071426 bytes)
	I0717 18:40:59.686445   80180 crio.go:462] duration metric: took 1.194619366s to copy over tarball
	I0717 18:40:59.686513   80180 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:41:01.862679   80180 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.176132338s)
	I0717 18:41:01.862710   80180 crio.go:469] duration metric: took 2.176236509s to extract the tarball
	I0717 18:41:01.862719   80180 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:41:01.901813   80180 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:41:01.945403   80180 crio.go:514] all images are preloaded for cri-o runtime.
	I0717 18:41:01.945429   80180 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:41:01.945438   80180 kubeadm.go:934] updating node { 192.168.61.90 8443 v1.30.2 crio true true} ...
	I0717 18:41:01.945554   80180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-527415 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 18:41:01.945631   80180 ssh_runner.go:195] Run: crio config
	I0717 18:41:01.991102   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:01.991130   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:01.991144   80180 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 18:41:01.991168   80180 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.90 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-527415 NodeName:embed-certs-527415 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:41:01.991331   80180 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-527415"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:41:01.991397   80180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 18:41:02.001007   80180 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:41:02.001082   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:41:02.010130   80180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0717 18:41:02.025405   80180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:41:02.041167   80180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
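For illustration, the rendered multi-document kubeadm config shown above is written to /var/tmp/minikube/kubeadm.yaml.new at this point. A minimal Go sketch of how such a file could be sanity-checked as valid YAML follows; this is not minikube's own validation path, and the gopkg.in/yaml.v3 dependency and the file path are assumptions taken from the log.

    // Hypothetical sketch: parse each YAML document in a generated kubeadm.yaml
    // and print its apiVersion/kind, failing on any malformed document.
    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatalf("invalid YAML document: %v", err)
    		}
    		fmt.Printf("parsed %v/%v\n", doc["apiVersion"], doc["kind"])
    	}
    }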
	I0717 18:41:02.057441   80180 ssh_runner.go:195] Run: grep 192.168.61.90	control-plane.minikube.internal$ /etc/hosts
	I0717 18:41:02.060878   80180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.90	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:41:02.072984   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:41:02.188194   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:41:02.204599   80180 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415 for IP: 192.168.61.90
	I0717 18:41:02.204623   80180 certs.go:194] generating shared ca certs ...
	I0717 18:41:02.204643   80180 certs.go:226] acquiring lock for ca certs: {Name:mkc28fdf3682821f5875d0a18529b464529e842e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:41:02.204822   80180 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key
	I0717 18:41:02.204885   80180 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key
	I0717 18:41:02.204899   80180 certs.go:256] generating profile certs ...
	I0717 18:41:02.205047   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/client.key
	I0717 18:41:02.205129   80180 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key.f26848e9
	I0717 18:41:02.205188   80180 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key
	I0717 18:41:02.205372   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem (1338 bytes)
	W0717 18:41:02.205436   80180 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577_empty.pem, impossibly tiny 0 bytes
	I0717 18:41:02.205451   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca-key.pem (1675 bytes)
	I0717 18:41:02.205486   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/ca.pem (1082 bytes)
	I0717 18:41:02.205526   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:41:02.205556   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/certs/key.pem (1679 bytes)
	I0717 18:41:02.205612   80180 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem (1708 bytes)
	I0717 18:41:02.206441   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:41:02.234135   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 18:41:02.259780   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:41:02.285464   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 18:41:02.316267   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0717 18:41:02.348835   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:41:02.375505   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:41:02.402683   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/embed-certs-527415/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:41:02.426689   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:41:02.449328   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/certs/21577.pem --> /usr/share/ca-certificates/21577.pem (1338 bytes)
	I0717 18:41:02.472140   80180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/ssl/certs/215772.pem --> /usr/share/ca-certificates/215772.pem (1708 bytes)
	I0717 18:41:02.494016   80180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:41:02.512612   80180 ssh_runner.go:195] Run: openssl version
	I0717 18:41:02.519908   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:41:02.532706   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538136   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 17:12 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.538191   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:41:02.545493   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:41:02.558832   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21577.pem && ln -fs /usr/share/ca-certificates/21577.pem /etc/ssl/certs/21577.pem"
	I0717 18:41:02.570455   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575515   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 17:25 /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.575582   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21577.pem
	I0717 18:41:02.581428   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21577.pem /etc/ssl/certs/51391683.0"
	I0717 18:41:02.592439   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/215772.pem && ln -fs /usr/share/ca-certificates/215772.pem /etc/ssl/certs/215772.pem"
	I0717 18:41:02.602823   80180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608370   80180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 17:25 /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.608433   80180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/215772.pem
	I0717 18:41:02.615367   80180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/215772.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 18:41:02.628355   80180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 18:41:02.632772   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 18:41:02.638325   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 18:41:02.643635   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 18:41:02.648960   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 18:41:02.654088   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 18:41:02.659220   80180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
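For illustration, each "openssl x509 -noout -in <cert> -checkend 86400" call above asks whether that certificate expires within the next 24 hours. A minimal Go-stdlib sketch of the same question; the certificate path is one of those named in the log.

    // Hypothetical sketch: report whether a PEM certificate expires within 24h,
    // mirroring what "openssl x509 -checkend 86400" checks.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // path from the log
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h")
    	} else {
    		fmt.Println("certificate is valid for at least another 24h")
    	}
    }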
	I0717 18:41:02.664325   80180 kubeadm.go:392] StartCluster: {Name:embed-certs-527415 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-527415 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 18:41:02.664444   80180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:41:02.664495   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.699590   80180 cri.go:89] found id: ""
	I0717 18:41:02.699676   80180 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:41:02.709427   80180 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 18:41:02.709452   80180 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 18:41:02.709503   80180 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 18:41:02.718489   80180 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:41:02.719505   80180 kubeconfig.go:125] found "embed-certs-527415" server: "https://192.168.61.90:8443"
	I0717 18:41:02.721457   80180 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 18:41:02.730258   80180 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.90
	I0717 18:41:02.730288   80180 kubeadm.go:1160] stopping kube-system containers ...
	I0717 18:41:02.730301   80180 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 18:41:02.730367   80180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:41:02.768268   80180 cri.go:89] found id: ""
	I0717 18:41:02.768339   80180 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 18:41:02.786699   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:41:02.796888   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:41:02.796912   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:41:02.796965   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:41:02.805633   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:41:02.805703   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:41:02.817624   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:41:02.827840   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:41:02.827902   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:41:02.836207   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.844201   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:41:02.844265   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:41:02.852667   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:41:02.860697   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:41:02.860741   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:41:02.869133   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:41:02.877992   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:02.986350   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:00.823447   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.825375   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:00.578095   81068 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:02.576899   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.576927   81068 pod_ready.go:81] duration metric: took 10.506835962s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.576953   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584912   81068 pod_ready.go:92] pod "kube-proxy-hj7ss" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.584933   81068 pod_ready.go:81] duration metric: took 7.972079ms for pod "kube-proxy-hj7ss" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.584964   81068 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590342   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:02.590366   81068 pod_ready.go:81] duration metric: took 5.392364ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:02.590380   81068 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:00.259461   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:00.759148   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.259596   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:01.759943   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.259670   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:02.759900   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.259745   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.759843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.259902   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.759850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:03.874112   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.091026   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.170734   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:04.292719   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:41:04.292826   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:04.793710   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.292924   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.792872   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.293626   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.793632   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.810658   80180 api_server.go:72] duration metric: took 2.517938682s to wait for apiserver process to appear ...
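For illustration, the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" probes above simply poll until an apiserver process exists. A minimal Go sketch of that polling idea, run locally rather than over SSH; the deadline value is an assumption.

    // Hypothetical sketch: poll pgrep until a kube-apiserver process appears
    // or a deadline passes. pgrep exits 0 when at least one process matches.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
    	for time.Now().Before(deadline) {
    		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			fmt.Println("kube-apiserver process found")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("timed out waiting for kube-apiserver process")
    }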
	I0717 18:41:06.810685   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:41:06.810705   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:05.323684   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:07.324653   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:04.596794   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:06.597411   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:09.097409   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:05.259624   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:05.759258   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.259346   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:06.759041   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.259467   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:07.759164   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.259047   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:08.759959   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.259372   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.759259   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:09.612683   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.612715   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.612728   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.633949   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 18:41:09.633975   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 18:41:09.811272   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:09.815690   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:09.815720   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:10.311256   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.319587   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.319620   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:10.811133   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:10.815819   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:10.815862   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.311037   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.315892   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.315923   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:11.811534   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:11.816601   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:11.816631   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.311178   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.315484   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.315510   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:12.811068   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:12.821016   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0717 18:41:12.821048   80180 api_server.go:103] status: https://192.168.61.90:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0717 18:41:13.311166   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:41:13.315879   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:41:13.322661   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:41:13.322700   80180 api_server.go:131] duration metric: took 6.512007091s to wait for apiserver health ...
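For illustration, the healthz entries above reflect a wait loop that polls https://192.168.61.90:8443/healthz until the 403 and 500 responses give way to a 200. A minimal Go sketch of such a loop follows; it skips TLS verification for brevity, whereas the real client authenticates against the cluster CA, and the deadline value is an assumption.

    // Hypothetical sketch: poll the apiserver /healthz endpoint until it
    // returns HTTP 200 or a deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // simplified; real code trusts the cluster CA
    	}
    	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.61.90:8443/healthz") // node IP from the log
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver is healthy")
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("timed out waiting for apiserver healthz")
    }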
	I0717 18:41:13.322713   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:41:13.322722   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:41:13.324516   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:41:09.325535   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.325697   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:13.327238   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:11.597479   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:14.098908   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:10.259845   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:10.759671   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.259895   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:11.759877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.259003   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:12.759685   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.759844   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.259541   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:14.759709   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:13.325935   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:41:13.337601   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:41:13.354366   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:41:13.364678   80180 system_pods.go:59] 8 kube-system pods found
	I0717 18:41:13.364715   80180 system_pods.go:61] "coredns-7db6d8ff4d-2fnlb" [86d50e9b-fb88-4332-90c5-a969b0654635] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:41:13.364726   80180 system_pods.go:61] "etcd-embed-certs-527415" [9d8ac0a8-4639-48d8-8ac4-88b0bd1e2082] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 18:41:13.364735   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [7f72c4f9-f1db-4ac6-83e1-2b94245107c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 18:41:13.364743   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [96081a97-2a90-4fec-84cb-9a399a43aeb4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 18:41:13.364752   80180 system_pods.go:61] "kube-proxy-jltfs" [27f6259e-80cc-4881-bb06-6a2ad529179c] Running
	I0717 18:41:13.364763   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [bed7b515-7ab0-460c-a13f-037f29576f30] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 18:41:13.364775   80180 system_pods.go:61] "metrics-server-569cc877fc-8md44" [1b9d50c8-6ca0-41c3-92d9-eebdccbf1a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:41:13.364783   80180 system_pods.go:61] "storage-provisioner" [ccb34b69-d28d-477e-8c7a-0acdc547bec7] Running
	I0717 18:41:13.364791   80180 system_pods.go:74] duration metric: took 10.40947ms to wait for pod list to return data ...
	I0717 18:41:13.364803   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:41:13.367687   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:41:13.367712   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:41:13.367725   80180 node_conditions.go:105] duration metric: took 2.912986ms to run NodePressure ...
	I0717 18:41:13.367745   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 18:41:13.630827   80180 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636658   80180 kubeadm.go:739] kubelet initialised
	I0717 18:41:13.636688   80180 kubeadm.go:740] duration metric: took 5.830484ms waiting for restarted kubelet to initialise ...
	I0717 18:41:13.636699   80180 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:41:13.642171   80180 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.650539   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650573   80180 pod_ready.go:81] duration metric: took 8.374432ms for pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.650585   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "coredns-7db6d8ff4d-2fnlb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.650599   80180 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.655470   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655500   80180 pod_ready.go:81] duration metric: took 4.8911ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.655512   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "etcd-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.655520   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.662448   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662479   80180 pod_ready.go:81] duration metric: took 6.949002ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.662490   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.662499   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:13.757454   80180 pod_ready.go:97] node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757485   80180 pod_ready.go:81] duration metric: took 94.976348ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	E0717 18:41:13.757494   80180 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-527415" hosting pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-527415" has status "Ready":"False"
	I0717 18:41:13.757501   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157339   80180 pod_ready.go:92] pod "kube-proxy-jltfs" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:14.157363   80180 pod_ready.go:81] duration metric: took 399.852649ms for pod "kube-proxy-jltfs" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:14.157381   80180 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:16.163623   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:15.825045   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.323440   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:16.596320   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:18.596807   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:15.259558   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:15.759585   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.259850   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:16.760009   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.259385   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:17.759208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.259218   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.759779   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.259666   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:19.759781   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:18.174371   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.664423   80180 pod_ready.go:102] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.663932   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:41:22.663955   80180 pod_ready.go:81] duration metric: took 8.506565077s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:22.663969   80180 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
	I0717 18:41:20.324547   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:22.824318   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:21.096071   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:23.596775   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:20.259286   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:20.759048   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.259801   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:21.759595   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.259582   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:22.759871   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.259349   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:23.759659   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.259964   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.759899   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:24.671105   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:27.170247   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:24.825017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.825067   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:26.096196   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:28.097501   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:25.259559   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:25.759773   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.259038   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:26.759924   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.259509   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:27.759986   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.259792   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:28.759564   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:29.259060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:29.259143   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:29.298974   80857 cri.go:89] found id: ""
	I0717 18:41:29.299006   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.299016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:29.299024   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:29.299087   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:29.333764   80857 cri.go:89] found id: ""
	I0717 18:41:29.333786   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.333793   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:29.333801   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:29.333849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:29.369639   80857 cri.go:89] found id: ""
	I0717 18:41:29.369674   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.369688   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:29.369697   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:29.369762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:29.403453   80857 cri.go:89] found id: ""
	I0717 18:41:29.403481   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.403489   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:29.403498   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:29.403555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:29.436662   80857 cri.go:89] found id: ""
	I0717 18:41:29.436687   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.436695   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:29.436701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:29.436749   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:29.471013   80857 cri.go:89] found id: ""
	I0717 18:41:29.471053   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.471064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:29.471074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:29.471139   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:29.502754   80857 cri.go:89] found id: ""
	I0717 18:41:29.502780   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.502787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:29.502793   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:29.502842   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:29.534205   80857 cri.go:89] found id: ""
	I0717 18:41:29.534232   80857 logs.go:276] 0 containers: []
	W0717 18:41:29.534239   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:29.534247   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:29.534259   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:29.585406   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:29.585438   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:29.600629   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:29.600660   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:29.719788   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:29.719807   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:29.719819   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:29.785626   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:29.785662   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:29.669918   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.670544   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:29.325013   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:31.828532   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:30.097685   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.596760   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:32.325522   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:32.338046   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:32.338120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:32.370073   80857 cri.go:89] found id: ""
	I0717 18:41:32.370099   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.370106   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:32.370112   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:32.370165   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:32.408764   80857 cri.go:89] found id: ""
	I0717 18:41:32.408789   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.408799   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:32.408806   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:32.408862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:32.449078   80857 cri.go:89] found id: ""
	I0717 18:41:32.449108   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.449118   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:32.449125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:32.449176   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:32.481990   80857 cri.go:89] found id: ""
	I0717 18:41:32.482015   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.482022   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:32.482028   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:32.482077   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:32.521902   80857 cri.go:89] found id: ""
	I0717 18:41:32.521932   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.521942   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:32.521949   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:32.521997   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:32.554148   80857 cri.go:89] found id: ""
	I0717 18:41:32.554177   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.554206   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:32.554216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:32.554270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:32.587342   80857 cri.go:89] found id: ""
	I0717 18:41:32.587366   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.587374   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:32.587379   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:32.587425   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:32.619227   80857 cri.go:89] found id: ""
	I0717 18:41:32.619259   80857 logs.go:276] 0 containers: []
	W0717 18:41:32.619270   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:32.619281   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:32.619296   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:32.669085   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:32.669124   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:32.682464   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:32.682500   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:32.749218   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:32.749234   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:32.749245   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:32.814510   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:32.814545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:33.670578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.670952   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.671373   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:34.324458   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:36.823615   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:38.825194   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.096041   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:37.096436   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:39.096906   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:35.362866   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:35.375563   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:35.375643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:35.412355   80857 cri.go:89] found id: ""
	I0717 18:41:35.412380   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.412388   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:35.412393   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:35.412439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:35.446596   80857 cri.go:89] found id: ""
	I0717 18:41:35.446621   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.446629   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:35.446634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:35.446691   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:35.481695   80857 cri.go:89] found id: ""
	I0717 18:41:35.481717   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.481725   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:35.481730   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:35.481783   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:35.514528   80857 cri.go:89] found id: ""
	I0717 18:41:35.514573   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.514584   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:35.514592   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:35.514657   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:35.547831   80857 cri.go:89] found id: ""
	I0717 18:41:35.547858   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.547871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:35.547879   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:35.547941   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:35.579059   80857 cri.go:89] found id: ""
	I0717 18:41:35.579084   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.579097   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:35.579104   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:35.579164   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:35.616442   80857 cri.go:89] found id: ""
	I0717 18:41:35.616480   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.616487   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:35.616492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:35.616545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:35.647535   80857 cri.go:89] found id: ""
	I0717 18:41:35.647564   80857 logs.go:276] 0 containers: []
	W0717 18:41:35.647571   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:35.647579   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:35.647595   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:35.696664   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:35.696692   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:35.710474   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:35.710499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:35.785569   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:35.785595   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:35.785611   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:35.865750   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:35.865785   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:38.405391   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:38.417737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:38.417806   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:38.453848   80857 cri.go:89] found id: ""
	I0717 18:41:38.453877   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.453888   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:38.453895   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:38.453949   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:38.487083   80857 cri.go:89] found id: ""
	I0717 18:41:38.487112   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.487122   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:38.487129   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:38.487190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:38.517700   80857 cri.go:89] found id: ""
	I0717 18:41:38.517729   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.517738   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:38.517746   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:38.517808   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:38.547587   80857 cri.go:89] found id: ""
	I0717 18:41:38.547616   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.547625   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:38.547632   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:38.547780   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:38.581511   80857 cri.go:89] found id: ""
	I0717 18:41:38.581535   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.581542   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:38.581548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:38.581675   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:38.618308   80857 cri.go:89] found id: ""
	I0717 18:41:38.618327   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.618334   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:38.618340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:38.618401   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:38.658237   80857 cri.go:89] found id: ""
	I0717 18:41:38.658267   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.658278   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:38.658298   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:38.658359   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:38.694044   80857 cri.go:89] found id: ""
	I0717 18:41:38.694071   80857 logs.go:276] 0 containers: []
	W0717 18:41:38.694080   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:38.694090   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:38.694106   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:38.746621   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:38.746658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:38.758781   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:38.758805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:38.827327   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:38.827345   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:38.827357   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:38.899731   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:38.899762   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:40.170106   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:42.170391   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:40.825940   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.327489   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.097668   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:43.597625   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:41.437479   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:41.451264   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:41.451336   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:41.489053   80857 cri.go:89] found id: ""
	I0717 18:41:41.489083   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.489093   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:41.489101   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:41.489162   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:41.521954   80857 cri.go:89] found id: ""
	I0717 18:41:41.521985   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.521996   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:41.522003   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:41.522068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:41.556847   80857 cri.go:89] found id: ""
	I0717 18:41:41.556875   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.556884   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:41.556893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:41.557024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:41.591232   80857 cri.go:89] found id: ""
	I0717 18:41:41.591255   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.591263   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:41.591269   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:41.591315   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:41.624533   80857 cri.go:89] found id: ""
	I0717 18:41:41.624565   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.624576   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:41.624583   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:41.624644   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:41.656033   80857 cri.go:89] found id: ""
	I0717 18:41:41.656063   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.656073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:41.656080   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:41.656140   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:41.691686   80857 cri.go:89] found id: ""
	I0717 18:41:41.691715   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.691725   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:41.691732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:41.691789   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:41.724688   80857 cri.go:89] found id: ""
	I0717 18:41:41.724718   80857 logs.go:276] 0 containers: []
	W0717 18:41:41.724729   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:41.724741   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:41.724760   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:41.802855   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:41.802882   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:41.839242   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:41.839271   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:41.889028   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:41.889058   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:41.901598   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:41.901627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:41.972632   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.472824   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:44.487673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:44.487745   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:44.530173   80857 cri.go:89] found id: ""
	I0717 18:41:44.530204   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.530216   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:44.530224   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:44.530288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:44.577865   80857 cri.go:89] found id: ""
	I0717 18:41:44.577891   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.577899   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:44.577905   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:44.577967   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:44.621528   80857 cri.go:89] found id: ""
	I0717 18:41:44.621551   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.621559   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:44.621564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:44.621622   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:44.655456   80857 cri.go:89] found id: ""
	I0717 18:41:44.655488   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.655498   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:44.655505   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:44.655570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:44.688729   80857 cri.go:89] found id: ""
	I0717 18:41:44.688757   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.688767   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:44.688774   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:44.688832   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:44.720190   80857 cri.go:89] found id: ""
	I0717 18:41:44.720220   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.720231   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:44.720238   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:44.720294   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:44.750109   80857 cri.go:89] found id: ""
	I0717 18:41:44.750135   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.750142   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:44.750147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:44.750203   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:44.780039   80857 cri.go:89] found id: ""
	I0717 18:41:44.780066   80857 logs.go:276] 0 containers: []
	W0717 18:41:44.780090   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:44.780098   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:44.780111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:44.829641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:44.829675   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:44.842587   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:44.842616   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:44.906331   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:44.906355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:44.906369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:44.983364   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:44.983400   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:44.671557   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.170565   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:45.827780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.324627   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:46.096988   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:48.596469   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:47.525057   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:47.538586   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:47.538639   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:47.574805   80857 cri.go:89] found id: ""
	I0717 18:41:47.574832   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.574843   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:47.574849   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:47.574906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:47.609576   80857 cri.go:89] found id: ""
	I0717 18:41:47.609603   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.609611   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:47.609617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:47.609662   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:47.643899   80857 cri.go:89] found id: ""
	I0717 18:41:47.643927   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.643936   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:47.643941   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:47.643990   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:47.680365   80857 cri.go:89] found id: ""
	I0717 18:41:47.680404   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.680412   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:47.680418   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:47.680475   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:47.719038   80857 cri.go:89] found id: ""
	I0717 18:41:47.719061   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.719069   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:47.719074   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:47.719118   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:47.751708   80857 cri.go:89] found id: ""
	I0717 18:41:47.751735   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.751744   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:47.751750   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:47.751807   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:47.789803   80857 cri.go:89] found id: ""
	I0717 18:41:47.789838   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.789850   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:47.789858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:47.789921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:47.821450   80857 cri.go:89] found id: ""
	I0717 18:41:47.821477   80857 logs.go:276] 0 containers: []
	W0717 18:41:47.821487   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:47.821496   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:47.821511   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:47.886501   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:47.886526   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:47.886544   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:47.960142   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:47.960177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:47.995012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:47.995046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:48.046848   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:48.046884   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:49.670208   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:52.169471   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.324628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.597215   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:53.096114   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:50.560990   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:50.574906   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:50.575051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:50.607647   80857 cri.go:89] found id: ""
	I0717 18:41:50.607674   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.607687   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:50.607696   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:50.607756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:50.640621   80857 cri.go:89] found id: ""
	I0717 18:41:50.640651   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.640660   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:50.640667   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:50.640741   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:50.675269   80857 cri.go:89] found id: ""
	I0717 18:41:50.675293   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.675303   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:50.675313   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:50.675369   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:50.707915   80857 cri.go:89] found id: ""
	I0717 18:41:50.707938   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.707946   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:50.707951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:50.708006   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:50.741149   80857 cri.go:89] found id: ""
	I0717 18:41:50.741170   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.741178   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:50.741184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:50.741288   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:50.772768   80857 cri.go:89] found id: ""
	I0717 18:41:50.772792   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.772799   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:50.772804   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:50.772854   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:50.804996   80857 cri.go:89] found id: ""
	I0717 18:41:50.805018   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.805028   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:50.805035   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:50.805094   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:50.838933   80857 cri.go:89] found id: ""
	I0717 18:41:50.838960   80857 logs.go:276] 0 containers: []
	W0717 18:41:50.838971   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:50.838982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:50.838997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:50.886415   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:50.886444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:50.899024   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:50.899049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:50.965388   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:50.965416   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:50.965434   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:51.044449   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:51.044490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.580749   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:53.593759   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:53.593841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:53.626541   80857 cri.go:89] found id: ""
	I0717 18:41:53.626573   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.626582   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:53.626588   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:53.626645   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:53.658492   80857 cri.go:89] found id: ""
	I0717 18:41:53.658520   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.658529   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:53.658537   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:53.658600   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:53.694546   80857 cri.go:89] found id: ""
	I0717 18:41:53.694582   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.694590   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:53.694595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:53.694650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:53.727028   80857 cri.go:89] found id: ""
	I0717 18:41:53.727053   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.727061   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:53.727067   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:53.727129   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:53.762869   80857 cri.go:89] found id: ""
	I0717 18:41:53.762897   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.762906   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:53.762913   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:53.762976   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:53.794133   80857 cri.go:89] found id: ""
	I0717 18:41:53.794158   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.794166   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:53.794172   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:53.794225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:53.828432   80857 cri.go:89] found id: ""
	I0717 18:41:53.828463   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.828473   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:53.828484   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:53.828546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:53.863316   80857 cri.go:89] found id: ""
	I0717 18:41:53.863345   80857 logs.go:276] 0 containers: []
	W0717 18:41:53.863353   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:53.863362   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:53.863384   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:53.897353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:53.897380   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:53.944213   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:53.944242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:53.957484   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:53.957509   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:54.025962   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:54.025992   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:54.026006   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:54.170642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.672407   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.325017   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:57.823877   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:55.596492   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:58.096397   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:56.609502   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:56.621849   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:56.621913   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:56.657469   80857 cri.go:89] found id: ""
	I0717 18:41:56.657498   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.657510   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:56.657517   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:56.657579   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:56.691298   80857 cri.go:89] found id: ""
	I0717 18:41:56.691320   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.691327   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:56.691332   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:56.691386   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:56.723305   80857 cri.go:89] found id: ""
	I0717 18:41:56.723334   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.723344   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:56.723352   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:56.723417   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:56.755893   80857 cri.go:89] found id: ""
	I0717 18:41:56.755918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.755926   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:56.755931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:56.755982   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:56.787777   80857 cri.go:89] found id: ""
	I0717 18:41:56.787807   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.787819   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:56.787828   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:56.787894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:56.821126   80857 cri.go:89] found id: ""
	I0717 18:41:56.821152   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.821163   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:56.821170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:56.821228   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:56.855894   80857 cri.go:89] found id: ""
	I0717 18:41:56.855918   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.855926   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:56.855931   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:56.855980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:56.893483   80857 cri.go:89] found id: ""
	I0717 18:41:56.893505   80857 logs.go:276] 0 containers: []
	W0717 18:41:56.893512   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:56.893521   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:41:56.893532   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:56.945355   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:41:56.945385   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:41:56.958426   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:41:56.958451   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:41:57.025542   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:41:57.025571   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:57.025585   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:57.100497   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:57.100528   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:41:59.636400   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:41:59.648517   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:41:59.648571   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:41:59.683954   80857 cri.go:89] found id: ""
	I0717 18:41:59.683978   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.683988   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:41:59.683995   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:41:59.684065   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:41:59.719135   80857 cri.go:89] found id: ""
	I0717 18:41:59.719162   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.719172   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:41:59.719179   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:41:59.719243   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:41:59.755980   80857 cri.go:89] found id: ""
	I0717 18:41:59.756012   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.756023   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:41:59.756030   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:41:59.756091   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:41:59.788147   80857 cri.go:89] found id: ""
	I0717 18:41:59.788176   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.788185   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:41:59.788191   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:41:59.788239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:41:59.819646   80857 cri.go:89] found id: ""
	I0717 18:41:59.819670   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.819679   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:41:59.819685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:41:59.819738   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:41:59.852487   80857 cri.go:89] found id: ""
	I0717 18:41:59.852508   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.852516   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:41:59.852521   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:41:59.852586   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:41:59.883761   80857 cri.go:89] found id: ""
	I0717 18:41:59.883794   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.883805   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:41:59.883812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:41:59.883870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:41:59.914854   80857 cri.go:89] found id: ""
	I0717 18:41:59.914882   80857 logs.go:276] 0 containers: []
	W0717 18:41:59.914889   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:41:59.914896   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:41:59.914909   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:41:59.995619   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:41:59.995650   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:00.034444   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:00.034472   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:41:59.172253   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.670422   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:41:59.824347   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:01.824444   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:03.826580   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.096457   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:02.596587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:00.084278   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:00.084308   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:00.097771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:00.097796   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:00.161753   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:02.662134   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:02.676200   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:02.676277   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:02.711606   80857 cri.go:89] found id: ""
	I0717 18:42:02.711640   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.711652   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:02.711659   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:02.711711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:02.744704   80857 cri.go:89] found id: ""
	I0717 18:42:02.744728   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.744735   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:02.744741   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:02.744800   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:02.778815   80857 cri.go:89] found id: ""
	I0717 18:42:02.778846   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.778859   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:02.778868   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:02.778936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:02.810896   80857 cri.go:89] found id: ""
	I0717 18:42:02.810928   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.810941   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:02.810950   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:02.811024   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:02.843868   80857 cri.go:89] found id: ""
	I0717 18:42:02.843892   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.843903   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:02.843910   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:02.843972   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:02.876311   80857 cri.go:89] found id: ""
	I0717 18:42:02.876338   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.876348   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:02.876356   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:02.876420   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:02.910752   80857 cri.go:89] found id: ""
	I0717 18:42:02.910776   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.910784   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:02.910789   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:02.910835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:02.947286   80857 cri.go:89] found id: ""
	I0717 18:42:02.947318   80857 logs.go:276] 0 containers: []
	W0717 18:42:02.947328   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:02.947337   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:02.947351   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:02.999512   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:02.999542   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:03.014063   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:03.014094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:03.081822   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:03.081844   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:03.081858   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:03.161088   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:03.161117   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:04.171168   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.669508   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:06.324608   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:08.825084   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:04.597129   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:07.098716   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:05.699198   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:05.711597   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:05.711654   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:05.749653   80857 cri.go:89] found id: ""
	I0717 18:42:05.749684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.749694   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:05.749703   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:05.749757   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:05.785095   80857 cri.go:89] found id: ""
	I0717 18:42:05.785118   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.785125   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:05.785134   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:05.785179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:05.818085   80857 cri.go:89] found id: ""
	I0717 18:42:05.818111   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.818119   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:05.818125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:05.818171   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:05.851872   80857 cri.go:89] found id: ""
	I0717 18:42:05.851895   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.851902   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:05.851907   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:05.851958   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:05.883924   80857 cri.go:89] found id: ""
	I0717 18:42:05.883948   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.883958   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:05.883965   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:05.884025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:05.916365   80857 cri.go:89] found id: ""
	I0717 18:42:05.916396   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.916407   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:05.916414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:05.916473   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:05.950656   80857 cri.go:89] found id: ""
	I0717 18:42:05.950684   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.950695   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:05.950701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:05.950762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:05.992132   80857 cri.go:89] found id: ""
	I0717 18:42:05.992160   80857 logs.go:276] 0 containers: []
	W0717 18:42:05.992169   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:05.992177   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:05.992190   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:06.042162   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:06.042192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:06.055594   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:06.055619   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:06.123007   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:06.123038   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:06.123068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:06.200429   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:06.200460   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.739039   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:08.751520   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:08.751575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:08.783765   80857 cri.go:89] found id: ""
	I0717 18:42:08.783794   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.783805   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:08.783812   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:08.783864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:08.815200   80857 cri.go:89] found id: ""
	I0717 18:42:08.815227   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.815236   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:08.815242   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:08.815289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:08.848970   80857 cri.go:89] found id: ""
	I0717 18:42:08.849002   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.849012   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:08.849021   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:08.849084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:08.881832   80857 cri.go:89] found id: ""
	I0717 18:42:08.881859   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.881866   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:08.881874   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:08.881922   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:08.913119   80857 cri.go:89] found id: ""
	I0717 18:42:08.913142   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.913149   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:08.913155   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:08.913201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:08.947471   80857 cri.go:89] found id: ""
	I0717 18:42:08.947499   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.947509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:08.947515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:08.947570   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:08.979570   80857 cri.go:89] found id: ""
	I0717 18:42:08.979599   80857 logs.go:276] 0 containers: []
	W0717 18:42:08.979609   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:08.979615   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:08.979670   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:09.012960   80857 cri.go:89] found id: ""
	I0717 18:42:09.012991   80857 logs.go:276] 0 containers: []
	W0717 18:42:09.013002   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:09.013012   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:09.013027   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:09.065732   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:09.065769   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:09.079572   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:09.079602   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:09.151737   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:09.151754   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:09.151766   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:09.230185   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:09.230218   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:08.670185   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:10.671336   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.325340   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:13.824087   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:09.595757   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.596784   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:14.096765   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:11.767189   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:11.780044   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:11.780115   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:11.812700   80857 cri.go:89] found id: ""
	I0717 18:42:11.812722   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.812730   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:11.812736   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:11.812781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:11.846855   80857 cri.go:89] found id: ""
	I0717 18:42:11.846883   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.846893   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:11.846900   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:11.846962   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:11.877671   80857 cri.go:89] found id: ""
	I0717 18:42:11.877700   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.877710   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:11.877716   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:11.877767   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:11.908703   80857 cri.go:89] found id: ""
	I0717 18:42:11.908728   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.908735   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:11.908740   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:11.908786   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:11.942191   80857 cri.go:89] found id: ""
	I0717 18:42:11.942218   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.942225   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:11.942231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:11.942284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:11.974751   80857 cri.go:89] found id: ""
	I0717 18:42:11.974782   80857 logs.go:276] 0 containers: []
	W0717 18:42:11.974798   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:11.974807   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:11.974876   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:12.006287   80857 cri.go:89] found id: ""
	I0717 18:42:12.006317   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.006327   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:12.006335   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:12.006396   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:12.036524   80857 cri.go:89] found id: ""
	I0717 18:42:12.036546   80857 logs.go:276] 0 containers: []
	W0717 18:42:12.036554   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:12.036575   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:12.036599   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:12.085073   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:12.085109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:12.098908   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:12.098937   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:12.161665   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:12.161687   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:12.161702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:12.240349   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:12.240401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:14.781101   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:14.794081   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:14.794149   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:14.828975   80857 cri.go:89] found id: ""
	I0717 18:42:14.829003   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.829013   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:14.829021   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:14.829072   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:14.864858   80857 cri.go:89] found id: ""
	I0717 18:42:14.864886   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.864896   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:14.864903   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:14.864986   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:14.897961   80857 cri.go:89] found id: ""
	I0717 18:42:14.897983   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.897991   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:14.897996   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:14.898041   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:14.935499   80857 cri.go:89] found id: ""
	I0717 18:42:14.935521   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.935529   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:14.935534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:14.935591   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:14.967581   80857 cri.go:89] found id: ""
	I0717 18:42:14.967605   80857 logs.go:276] 0 containers: []
	W0717 18:42:14.967621   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:14.967629   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:14.967688   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:15.001844   80857 cri.go:89] found id: ""
	I0717 18:42:15.001876   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.001888   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:15.001894   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:15.001942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:15.038940   80857 cri.go:89] found id: ""
	I0717 18:42:15.038967   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.038977   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:15.038985   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:15.039043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:13.170111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.669712   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:17.669916   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.325511   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:18.823820   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:16.597587   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:19.096905   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:15.072636   80857 cri.go:89] found id: ""
	I0717 18:42:15.072665   80857 logs.go:276] 0 containers: []
	W0717 18:42:15.072677   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:15.072688   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:15.072703   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:15.124889   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:15.124934   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:15.138661   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:15.138691   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:15.208762   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:15.208791   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:15.208806   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:15.281302   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:15.281336   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:17.817136   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:17.831013   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:17.831078   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:17.867065   80857 cri.go:89] found id: ""
	I0717 18:42:17.867091   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.867101   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:17.867108   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:17.867166   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:17.904143   80857 cri.go:89] found id: ""
	I0717 18:42:17.904171   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.904180   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:17.904188   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:17.904248   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:17.937450   80857 cri.go:89] found id: ""
	I0717 18:42:17.937478   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.937487   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:17.937492   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:17.937556   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:17.970650   80857 cri.go:89] found id: ""
	I0717 18:42:17.970679   80857 logs.go:276] 0 containers: []
	W0717 18:42:17.970689   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:17.970696   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:17.970754   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:18.002329   80857 cri.go:89] found id: ""
	I0717 18:42:18.002355   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.002364   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:18.002371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:18.002430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:18.035253   80857 cri.go:89] found id: ""
	I0717 18:42:18.035278   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.035288   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:18.035295   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:18.035356   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:18.070386   80857 cri.go:89] found id: ""
	I0717 18:42:18.070419   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.070431   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:18.070439   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:18.070507   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:18.106148   80857 cri.go:89] found id: ""
	I0717 18:42:18.106170   80857 logs.go:276] 0 containers: []
	W0717 18:42:18.106177   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:18.106185   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:18.106201   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:18.157359   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:18.157390   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:18.171757   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:18.171782   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:18.242795   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:18.242818   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:18.242831   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:18.316221   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:18.316255   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:19.670562   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.171111   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.824266   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:22.824366   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:21.596773   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.098051   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:20.857953   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:20.870813   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:20.870882   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:20.906033   80857 cri.go:89] found id: ""
	I0717 18:42:20.906065   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.906075   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:20.906083   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:20.906142   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:20.942292   80857 cri.go:89] found id: ""
	I0717 18:42:20.942316   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.942335   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:20.942342   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:20.942403   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:20.985113   80857 cri.go:89] found id: ""
	I0717 18:42:20.985143   80857 logs.go:276] 0 containers: []
	W0717 18:42:20.985151   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:20.985157   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:20.985217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:21.021807   80857 cri.go:89] found id: ""
	I0717 18:42:21.021834   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.021842   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:21.021847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:21.021906   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:21.061924   80857 cri.go:89] found id: ""
	I0717 18:42:21.061949   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.061961   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:21.061969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:21.062025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:21.098890   80857 cri.go:89] found id: ""
	I0717 18:42:21.098916   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.098927   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:21.098935   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:21.098991   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:21.132576   80857 cri.go:89] found id: ""
	I0717 18:42:21.132612   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.132621   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:21.132627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:21.132687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:21.167723   80857 cri.go:89] found id: ""
	I0717 18:42:21.167765   80857 logs.go:276] 0 containers: []
	W0717 18:42:21.167778   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:21.167788   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:21.167803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:21.220427   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:21.220461   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:21.233191   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:21.233216   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:21.304462   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:21.304481   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:21.304498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:21.386887   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:21.386925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:23.926518   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:23.940470   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:23.940534   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:23.976739   80857 cri.go:89] found id: ""
	I0717 18:42:23.976763   80857 logs.go:276] 0 containers: []
	W0717 18:42:23.976773   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:23.976778   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:23.976838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:24.007575   80857 cri.go:89] found id: ""
	I0717 18:42:24.007603   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.007612   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:24.007617   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:24.007671   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:24.040430   80857 cri.go:89] found id: ""
	I0717 18:42:24.040455   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.040463   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:24.040468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:24.040581   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:24.071602   80857 cri.go:89] found id: ""
	I0717 18:42:24.071629   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.071638   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:24.071644   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:24.071705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:24.109570   80857 cri.go:89] found id: ""
	I0717 18:42:24.109595   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.109602   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:24.109607   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:24.109667   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:24.144284   80857 cri.go:89] found id: ""
	I0717 18:42:24.144305   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.144328   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:24.144333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:24.144382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:24.179441   80857 cri.go:89] found id: ""
	I0717 18:42:24.179467   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.179474   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:24.179479   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:24.179545   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:24.222100   80857 cri.go:89] found id: ""
	I0717 18:42:24.222133   80857 logs.go:276] 0 containers: []
	W0717 18:42:24.222143   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:24.222159   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:24.222175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:24.273181   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:24.273215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:24.285835   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:24.285861   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:24.357804   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:24.357826   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:24.357839   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:24.437270   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:24.437310   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:24.670033   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.671014   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:24.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:27.325296   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.597795   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.098055   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:26.979543   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:26.992443   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:26.992497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:27.025520   80857 cri.go:89] found id: ""
	I0717 18:42:27.025548   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.025560   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:27.025567   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:27.025630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:27.059971   80857 cri.go:89] found id: ""
	I0717 18:42:27.060002   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.060011   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:27.060016   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:27.060068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:27.091370   80857 cri.go:89] found id: ""
	I0717 18:42:27.091397   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.091407   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:27.091415   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:27.091468   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:27.123736   80857 cri.go:89] found id: ""
	I0717 18:42:27.123768   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.123779   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:27.123786   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:27.123849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:27.156155   80857 cri.go:89] found id: ""
	I0717 18:42:27.156177   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.156185   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:27.156190   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:27.156239   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:27.190701   80857 cri.go:89] found id: ""
	I0717 18:42:27.190729   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.190741   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:27.190749   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:27.190825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:27.222093   80857 cri.go:89] found id: ""
	I0717 18:42:27.222119   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.222130   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:27.222137   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:27.222199   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:27.258789   80857 cri.go:89] found id: ""
	I0717 18:42:27.258813   80857 logs.go:276] 0 containers: []
	W0717 18:42:27.258824   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:27.258834   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:27.258848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:27.307033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:27.307068   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:27.321181   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:27.321209   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:27.390560   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:27.390593   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:27.390613   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:27.464352   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:27.464389   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:30.005732   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:30.019088   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:30.019160   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:29.170578   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.670221   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:29.327610   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.824292   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.824392   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:31.595937   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:33.597622   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:30.052733   80857 cri.go:89] found id: ""
	I0717 18:42:30.052757   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.052765   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:30.052775   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:30.052836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:30.087683   80857 cri.go:89] found id: ""
	I0717 18:42:30.087711   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.087722   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:30.087729   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:30.087774   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:30.124371   80857 cri.go:89] found id: ""
	I0717 18:42:30.124404   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.124416   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:30.124432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:30.124487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:30.160081   80857 cri.go:89] found id: ""
	I0717 18:42:30.160107   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.160115   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:30.160122   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:30.160173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:30.194420   80857 cri.go:89] found id: ""
	I0717 18:42:30.194447   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.194456   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:30.194464   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:30.194522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:30.229544   80857 cri.go:89] found id: ""
	I0717 18:42:30.229570   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.229584   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:30.229591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:30.229650   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:30.264164   80857 cri.go:89] found id: ""
	I0717 18:42:30.264193   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.264204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:30.264211   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:30.264266   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:30.296958   80857 cri.go:89] found id: ""
	I0717 18:42:30.296986   80857 logs.go:276] 0 containers: []
	W0717 18:42:30.296996   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:30.297008   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:30.297049   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:30.348116   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:30.348145   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:30.361373   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:30.361401   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:30.429601   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:30.429620   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:30.429634   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:30.507718   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:30.507752   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:33.045539   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:33.058149   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:33.058219   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:33.088675   80857 cri.go:89] found id: ""
	I0717 18:42:33.088702   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.088710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:33.088717   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:33.088773   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:33.121269   80857 cri.go:89] found id: ""
	I0717 18:42:33.121297   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.121308   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:33.121315   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:33.121375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:33.156144   80857 cri.go:89] found id: ""
	I0717 18:42:33.156173   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.156184   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:33.156192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:33.156257   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:33.188559   80857 cri.go:89] found id: ""
	I0717 18:42:33.188585   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.188597   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:33.188603   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:33.188651   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:33.219650   80857 cri.go:89] found id: ""
	I0717 18:42:33.219672   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.219680   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:33.219686   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:33.219746   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:33.249704   80857 cri.go:89] found id: ""
	I0717 18:42:33.249728   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.249737   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:33.249742   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:33.249793   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:33.283480   80857 cri.go:89] found id: ""
	I0717 18:42:33.283503   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.283511   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:33.283516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:33.283560   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:33.314577   80857 cri.go:89] found id: ""
	I0717 18:42:33.314620   80857 logs.go:276] 0 containers: []
	W0717 18:42:33.314629   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:33.314638   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:33.314649   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:33.363458   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:33.363491   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:33.377240   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:33.377267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:33.442939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:33.442961   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:33.442976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:33.522422   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:33.522456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:34.170638   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.171034   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.324780   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.824832   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.097788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:38.596054   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:36.063823   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:36.078272   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:36.078342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:36.111460   80857 cri.go:89] found id: ""
	I0717 18:42:36.111494   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.111502   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:36.111509   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:36.111562   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:36.144191   80857 cri.go:89] found id: ""
	I0717 18:42:36.144222   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.144232   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:36.144239   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:36.144306   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:36.177247   80857 cri.go:89] found id: ""
	I0717 18:42:36.177277   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.177288   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:36.177294   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:36.177350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:36.213390   80857 cri.go:89] found id: ""
	I0717 18:42:36.213419   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.213427   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:36.213433   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:36.213493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:36.246775   80857 cri.go:89] found id: ""
	I0717 18:42:36.246799   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.246807   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:36.246812   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:36.246870   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:36.282441   80857 cri.go:89] found id: ""
	I0717 18:42:36.282463   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.282470   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:36.282476   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:36.282529   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:36.314178   80857 cri.go:89] found id: ""
	I0717 18:42:36.314203   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.314211   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:36.314216   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:36.314265   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:36.353705   80857 cri.go:89] found id: ""
	I0717 18:42:36.353730   80857 logs.go:276] 0 containers: []
	W0717 18:42:36.353737   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:36.353746   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:36.353758   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:36.370866   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:36.370894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:36.463660   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:36.463693   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:36.463710   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:36.540337   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:36.540371   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:36.575770   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:36.575801   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.128675   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:39.141187   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:39.141255   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:39.175960   80857 cri.go:89] found id: ""
	I0717 18:42:39.175982   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.175989   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:39.175994   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:39.176051   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:39.209442   80857 cri.go:89] found id: ""
	I0717 18:42:39.209472   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.209483   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:39.209490   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:39.209552   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:39.243225   80857 cri.go:89] found id: ""
	I0717 18:42:39.243249   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.243256   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:39.243262   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:39.243309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:39.277369   80857 cri.go:89] found id: ""
	I0717 18:42:39.277396   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.277407   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:39.277414   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:39.277464   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:39.310522   80857 cri.go:89] found id: ""
	I0717 18:42:39.310552   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.310563   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:39.310570   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:39.310637   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:39.344186   80857 cri.go:89] found id: ""
	I0717 18:42:39.344208   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.344216   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:39.344221   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:39.344279   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:39.375329   80857 cri.go:89] found id: ""
	I0717 18:42:39.375354   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.375366   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:39.375372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:39.375419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:39.412629   80857 cri.go:89] found id: ""
	I0717 18:42:39.412659   80857 logs.go:276] 0 containers: []
	W0717 18:42:39.412668   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:39.412679   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:39.412696   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:39.447607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:39.447644   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:39.498981   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:39.499013   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:39.512380   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:39.512409   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:39.580396   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:39.580415   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:39.580428   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:38.670213   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:41.170284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.825257   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:43.324155   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:40.596267   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.597199   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:42.158145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:42.177450   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:42.177522   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:42.222849   80857 cri.go:89] found id: ""
	I0717 18:42:42.222880   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.222890   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:42.222897   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:42.222954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:42.252712   80857 cri.go:89] found id: ""
	I0717 18:42:42.252742   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.252752   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:42.252757   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:42.252802   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:42.283764   80857 cri.go:89] found id: ""
	I0717 18:42:42.283789   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.283799   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:42.283806   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:42.283864   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:42.317243   80857 cri.go:89] found id: ""
	I0717 18:42:42.317270   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.317281   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:42.317288   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:42.317350   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:42.349972   80857 cri.go:89] found id: ""
	I0717 18:42:42.350000   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.350010   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:42.350017   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:42.350074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:42.382111   80857 cri.go:89] found id: ""
	I0717 18:42:42.382146   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.382158   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:42.382165   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:42.382223   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:42.414669   80857 cri.go:89] found id: ""
	I0717 18:42:42.414692   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.414700   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:42.414705   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:42.414765   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:42.446533   80857 cri.go:89] found id: ""
	I0717 18:42:42.446571   80857 logs.go:276] 0 containers: []
	W0717 18:42:42.446579   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:42.446588   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:42.446603   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:42.522142   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:42.522165   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:42.522177   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:42.602456   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:42.602493   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:42.642192   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:42.642221   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:42.695016   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:42.695046   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:43.170955   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.670631   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.325626   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.824543   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.097244   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:47.097783   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:45.208310   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:45.221821   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:45.221901   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:45.256887   80857 cri.go:89] found id: ""
	I0717 18:42:45.256914   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.256924   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:45.256930   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:45.256999   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:45.293713   80857 cri.go:89] found id: ""
	I0717 18:42:45.293735   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.293748   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:45.293753   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:45.293799   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:45.328790   80857 cri.go:89] found id: ""
	I0717 18:42:45.328815   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.328824   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:45.328833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:45.328880   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:45.364977   80857 cri.go:89] found id: ""
	I0717 18:42:45.365004   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.365014   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:45.365022   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:45.365084   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:45.401131   80857 cri.go:89] found id: ""
	I0717 18:42:45.401157   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.401164   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:45.401170   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:45.401217   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:45.432252   80857 cri.go:89] found id: ""
	I0717 18:42:45.432279   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.432287   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:45.432293   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:45.432338   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:45.464636   80857 cri.go:89] found id: ""
	I0717 18:42:45.464659   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.464667   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:45.464674   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:45.464728   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:45.494884   80857 cri.go:89] found id: ""
	I0717 18:42:45.494913   80857 logs.go:276] 0 containers: []
	W0717 18:42:45.494924   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:45.494935   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:45.494949   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:45.546578   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:45.546610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:45.559622   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:45.559647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:45.622094   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:45.622114   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:45.622126   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:45.699772   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:45.699814   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.241667   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:48.254205   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:48.254270   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:48.293258   80857 cri.go:89] found id: ""
	I0717 18:42:48.293287   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.293298   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:48.293305   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:48.293362   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:48.328778   80857 cri.go:89] found id: ""
	I0717 18:42:48.328807   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.328818   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:48.328824   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:48.328884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:48.360230   80857 cri.go:89] found id: ""
	I0717 18:42:48.360256   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.360266   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:48.360276   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:48.360335   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:48.397770   80857 cri.go:89] found id: ""
	I0717 18:42:48.397797   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.397808   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:48.397815   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:48.397873   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:48.430912   80857 cri.go:89] found id: ""
	I0717 18:42:48.430938   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.430946   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:48.430956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:48.431015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:48.462659   80857 cri.go:89] found id: ""
	I0717 18:42:48.462688   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.462699   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:48.462706   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:48.462771   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:48.497554   80857 cri.go:89] found id: ""
	I0717 18:42:48.497584   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.497594   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:48.497601   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:48.497665   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:48.529524   80857 cri.go:89] found id: ""
	I0717 18:42:48.529547   80857 logs.go:276] 0 containers: []
	W0717 18:42:48.529555   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:48.529564   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:48.529577   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:48.601265   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:48.601285   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:48.601297   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:48.678045   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:48.678075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:48.718565   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:48.718598   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:48.769923   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:48.769956   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:48.169777   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.669643   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.670334   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:50.324997   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.824163   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:49.596927   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:52.097602   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:51.282887   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:51.295778   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:51.295848   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:51.329324   80857 cri.go:89] found id: ""
	I0717 18:42:51.329351   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.329361   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:51.329369   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:51.329434   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:51.362013   80857 cri.go:89] found id: ""
	I0717 18:42:51.362042   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.362052   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:51.362059   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:51.362120   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:51.395039   80857 cri.go:89] found id: ""
	I0717 18:42:51.395069   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.395080   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:51.395087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:51.395155   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:51.427683   80857 cri.go:89] found id: ""
	I0717 18:42:51.427709   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.427717   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:51.427722   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:51.427772   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:51.461683   80857 cri.go:89] found id: ""
	I0717 18:42:51.461706   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.461718   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:51.461723   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:51.461769   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:51.495780   80857 cri.go:89] found id: ""
	I0717 18:42:51.495802   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.495810   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:51.495816   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:51.495867   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:51.527541   80857 cri.go:89] found id: ""
	I0717 18:42:51.527573   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.527583   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:51.527591   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:51.527648   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:51.567947   80857 cri.go:89] found id: ""
	I0717 18:42:51.567975   80857 logs.go:276] 0 containers: []
	W0717 18:42:51.567987   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:51.567997   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:51.568014   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:51.620083   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:51.620109   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:51.632823   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:51.632848   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:51.705731   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:51.705753   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:51.705767   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:51.781969   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:51.782005   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.318011   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:54.331886   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:54.331942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:54.362935   80857 cri.go:89] found id: ""
	I0717 18:42:54.362962   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.362972   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:54.362979   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:54.363032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:54.396153   80857 cri.go:89] found id: ""
	I0717 18:42:54.396180   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.396191   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:54.396198   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:54.396259   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:54.433123   80857 cri.go:89] found id: ""
	I0717 18:42:54.433150   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.433160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:54.433168   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:54.433224   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:54.465034   80857 cri.go:89] found id: ""
	I0717 18:42:54.465064   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.465079   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:54.465087   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:54.465200   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:54.496200   80857 cri.go:89] found id: ""
	I0717 18:42:54.496250   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.496263   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:54.496271   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:54.496332   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:54.528618   80857 cri.go:89] found id: ""
	I0717 18:42:54.528646   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.528656   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:54.528664   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:54.528724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:54.563018   80857 cri.go:89] found id: ""
	I0717 18:42:54.563042   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.563052   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:54.563059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:54.563114   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:54.595221   80857 cri.go:89] found id: ""
	I0717 18:42:54.595256   80857 logs.go:276] 0 containers: []
	W0717 18:42:54.595266   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:54.595275   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:54.595291   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:54.608193   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:54.608220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:54.673755   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:54.673778   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:54.673793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:54.756443   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:54.756483   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:54.792670   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:54.792700   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:55.169224   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.169851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.824614   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.324611   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:54.596824   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:56.597638   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.096992   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:57.344637   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:42:57.357003   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:42:57.357068   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:42:57.389230   80857 cri.go:89] found id: ""
	I0717 18:42:57.389261   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.389271   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:42:57.389278   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:42:57.389372   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:42:57.421529   80857 cri.go:89] found id: ""
	I0717 18:42:57.421553   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.421571   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:42:57.421578   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:42:57.421642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:42:57.455154   80857 cri.go:89] found id: ""
	I0717 18:42:57.455186   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.455193   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:42:57.455199   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:42:57.455245   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:42:57.490576   80857 cri.go:89] found id: ""
	I0717 18:42:57.490608   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.490621   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:42:57.490630   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:42:57.490693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:42:57.523972   80857 cri.go:89] found id: ""
	I0717 18:42:57.524010   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.524023   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:42:57.524033   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:42:57.524092   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:42:57.558106   80857 cri.go:89] found id: ""
	I0717 18:42:57.558132   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.558140   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:42:57.558145   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:42:57.558201   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:42:57.591009   80857 cri.go:89] found id: ""
	I0717 18:42:57.591035   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.591045   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:42:57.591051   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:42:57.591110   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:42:57.624564   80857 cri.go:89] found id: ""
	I0717 18:42:57.624592   80857 logs.go:276] 0 containers: []
	W0717 18:42:57.624601   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:42:57.624612   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:42:57.624627   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:42:57.699833   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:42:57.699868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:42:57.737029   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:42:57.737066   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:42:57.790562   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:42:57.790605   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:42:57.804935   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:42:57.804984   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:42:57.873081   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:42:59.170203   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.170348   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:42:59.325020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.824876   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.825020   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:01.596885   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:03.597698   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:00.374166   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:00.388370   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:00.388443   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:00.421228   80857 cri.go:89] found id: ""
	I0717 18:43:00.421257   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.421268   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:00.421276   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:00.421325   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:00.451819   80857 cri.go:89] found id: ""
	I0717 18:43:00.451846   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.451856   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:00.451862   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:00.451917   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:00.482960   80857 cri.go:89] found id: ""
	I0717 18:43:00.482993   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.483004   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:00.483015   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:00.483074   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:00.515860   80857 cri.go:89] found id: ""
	I0717 18:43:00.515882   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.515892   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:00.515899   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:00.515954   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:00.548177   80857 cri.go:89] found id: ""
	I0717 18:43:00.548202   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.548212   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:00.548217   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:00.548275   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:00.580759   80857 cri.go:89] found id: ""
	I0717 18:43:00.580782   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.580790   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:00.580795   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:00.580847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:00.618661   80857 cri.go:89] found id: ""
	I0717 18:43:00.618683   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.618691   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:00.618699   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:00.618742   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:00.650503   80857 cri.go:89] found id: ""
	I0717 18:43:00.650528   80857 logs.go:276] 0 containers: []
	W0717 18:43:00.650535   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:00.650544   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:00.650555   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:00.699668   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:00.699697   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:00.714086   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:00.714114   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:00.777051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:00.777087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:00.777105   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:00.859238   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:00.859274   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.399050   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:03.412565   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:03.412626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:03.445993   80857 cri.go:89] found id: ""
	I0717 18:43:03.446026   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.446038   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:03.446045   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:03.446101   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:03.481251   80857 cri.go:89] found id: ""
	I0717 18:43:03.481285   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.481297   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:03.481305   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:03.481371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:03.514406   80857 cri.go:89] found id: ""
	I0717 18:43:03.514433   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.514441   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:03.514447   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:03.514497   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:03.546217   80857 cri.go:89] found id: ""
	I0717 18:43:03.546248   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.546258   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:03.546266   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:03.546327   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:03.577287   80857 cri.go:89] found id: ""
	I0717 18:43:03.577318   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.577333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:03.577340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:03.577394   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:03.610080   80857 cri.go:89] found id: ""
	I0717 18:43:03.610101   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.610109   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:03.610114   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:03.610159   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:03.643753   80857 cri.go:89] found id: ""
	I0717 18:43:03.643777   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.643787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:03.643792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:03.643849   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:03.676290   80857 cri.go:89] found id: ""
	I0717 18:43:03.676338   80857 logs.go:276] 0 containers: []
	W0717 18:43:03.676345   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:03.676353   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:03.676364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:03.727818   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:03.727850   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:03.740752   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:03.740784   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:03.810465   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:03.810485   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:03.810499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:03.889326   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:03.889359   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:03.170473   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:05.170754   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:07.172145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.323855   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.325019   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.096213   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:08.096443   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:06.426949   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:06.440007   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:06.440079   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:06.471689   80857 cri.go:89] found id: ""
	I0717 18:43:06.471715   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.471724   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:06.471729   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:06.471775   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:06.503818   80857 cri.go:89] found id: ""
	I0717 18:43:06.503840   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.503847   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:06.503853   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:06.503900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:06.534733   80857 cri.go:89] found id: ""
	I0717 18:43:06.534755   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.534763   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:06.534768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:06.534818   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:06.565388   80857 cri.go:89] found id: ""
	I0717 18:43:06.565414   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.565421   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:06.565431   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:06.565480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:06.597739   80857 cri.go:89] found id: ""
	I0717 18:43:06.597764   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.597775   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:06.597782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:06.597847   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:06.629823   80857 cri.go:89] found id: ""
	I0717 18:43:06.629845   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.629853   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:06.629859   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:06.629921   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:06.663753   80857 cri.go:89] found id: ""
	I0717 18:43:06.663779   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.663787   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:06.663792   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:06.663838   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:06.700868   80857 cri.go:89] found id: ""
	I0717 18:43:06.700896   80857 logs.go:276] 0 containers: []
	W0717 18:43:06.700906   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:06.700917   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:06.700932   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:06.753064   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:06.753097   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:06.765845   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:06.765868   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:06.834691   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:06.834715   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:06.834729   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:06.908650   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:06.908682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.450804   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:09.463369   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:09.463452   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:09.506992   80857 cri.go:89] found id: ""
	I0717 18:43:09.507020   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.507028   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:09.507035   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:09.507093   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:09.543083   80857 cri.go:89] found id: ""
	I0717 18:43:09.543108   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.543116   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:09.543121   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:09.543174   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:09.576194   80857 cri.go:89] found id: ""
	I0717 18:43:09.576219   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.576226   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:09.576231   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:09.576289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:09.610148   80857 cri.go:89] found id: ""
	I0717 18:43:09.610171   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.610178   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:09.610184   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:09.610258   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:09.642217   80857 cri.go:89] found id: ""
	I0717 18:43:09.642246   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.642255   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:09.642263   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:09.642342   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:09.678041   80857 cri.go:89] found id: ""
	I0717 18:43:09.678064   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.678073   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:09.678079   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:09.678141   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:09.711162   80857 cri.go:89] found id: ""
	I0717 18:43:09.711193   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.711204   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:09.711212   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:09.711272   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:09.746135   80857 cri.go:89] found id: ""
	I0717 18:43:09.746164   80857 logs.go:276] 0 containers: []
	W0717 18:43:09.746175   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:09.746186   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:09.746197   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:09.799268   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:09.799303   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:09.811910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:09.811935   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:09.876939   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:09.876982   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:09.876998   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:09.951468   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:09.951502   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:09.671086   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.170273   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.823628   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.824485   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:10.597216   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:13.096347   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:12.488926   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:12.501054   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:12.501112   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:12.532536   80857 cri.go:89] found id: ""
	I0717 18:43:12.532569   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.532577   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:12.532582   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:12.532629   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:12.565102   80857 cri.go:89] found id: ""
	I0717 18:43:12.565130   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.565141   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:12.565148   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:12.565208   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:12.600262   80857 cri.go:89] found id: ""
	I0717 18:43:12.600299   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.600309   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:12.600316   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:12.600366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:12.633950   80857 cri.go:89] found id: ""
	I0717 18:43:12.633980   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.633991   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:12.633998   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:12.634054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:12.673297   80857 cri.go:89] found id: ""
	I0717 18:43:12.673325   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.673338   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:12.673345   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:12.673406   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:12.707112   80857 cri.go:89] found id: ""
	I0717 18:43:12.707136   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.707144   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:12.707150   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:12.707206   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:12.746323   80857 cri.go:89] found id: ""
	I0717 18:43:12.746348   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.746358   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:12.746372   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:12.746433   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:12.779470   80857 cri.go:89] found id: ""
	I0717 18:43:12.779496   80857 logs.go:276] 0 containers: []
	W0717 18:43:12.779507   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:12.779518   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:12.779534   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:12.830156   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:12.830178   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:12.843707   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:12.843734   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:12.911849   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:12.911875   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:12.911891   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:12.986090   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:12.986122   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:14.170350   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:16.670284   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:14.824727   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.324146   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.096736   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:17.596689   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:15.523428   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:15.536012   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:15.536070   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:15.569179   80857 cri.go:89] found id: ""
	I0717 18:43:15.569208   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.569218   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:15.569225   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:15.569273   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:15.606727   80857 cri.go:89] found id: ""
	I0717 18:43:15.606749   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.606757   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:15.606763   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:15.606805   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:15.638842   80857 cri.go:89] found id: ""
	I0717 18:43:15.638873   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.638883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:15.638889   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:15.638939   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:15.671418   80857 cri.go:89] found id: ""
	I0717 18:43:15.671444   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.671453   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:15.671459   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:15.671517   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:15.704892   80857 cri.go:89] found id: ""
	I0717 18:43:15.704928   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.704937   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:15.704956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:15.705013   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:15.738478   80857 cri.go:89] found id: ""
	I0717 18:43:15.738502   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.738509   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:15.738515   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:15.738584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:15.771188   80857 cri.go:89] found id: ""
	I0717 18:43:15.771225   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.771237   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:15.771245   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:15.771303   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:15.807737   80857 cri.go:89] found id: ""
	I0717 18:43:15.807763   80857 logs.go:276] 0 containers: []
	W0717 18:43:15.807770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:15.807779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:15.807790   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:15.861202   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:15.861234   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:15.874170   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:15.874200   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:15.938049   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:15.938073   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:15.938086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:16.025420   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:16.025456   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:18.563320   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:18.575574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:18.575634   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:18.608673   80857 cri.go:89] found id: ""
	I0717 18:43:18.608700   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.608710   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:18.608718   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:18.608782   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:18.641589   80857 cri.go:89] found id: ""
	I0717 18:43:18.641611   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.641618   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:18.641624   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:18.641679   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:18.672232   80857 cri.go:89] found id: ""
	I0717 18:43:18.672258   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.672268   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:18.672274   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:18.672331   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:18.706088   80857 cri.go:89] found id: ""
	I0717 18:43:18.706111   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.706118   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:18.706134   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:18.706179   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:18.742475   80857 cri.go:89] found id: ""
	I0717 18:43:18.742503   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.742512   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:18.742518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:18.742575   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:18.774141   80857 cri.go:89] found id: ""
	I0717 18:43:18.774169   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.774178   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:18.774183   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:18.774234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:18.806648   80857 cri.go:89] found id: ""
	I0717 18:43:18.806672   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.806679   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:18.806685   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:18.806731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:18.838022   80857 cri.go:89] found id: ""
	I0717 18:43:18.838047   80857 logs.go:276] 0 containers: []
	W0717 18:43:18.838054   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:18.838062   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:18.838076   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:18.903467   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:18.903487   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:18.903498   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:18.980385   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:18.980432   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:19.020884   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:19.020914   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:19.073530   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:19.073574   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:19.169841   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.172793   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:19.824764   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.826081   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:20.095275   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:22.097120   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:21.587870   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:21.602130   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:21.602185   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:21.635373   80857 cri.go:89] found id: ""
	I0717 18:43:21.635401   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.635411   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:21.635418   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:21.635480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:21.667175   80857 cri.go:89] found id: ""
	I0717 18:43:21.667200   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.667209   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:21.667216   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:21.667267   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:21.705876   80857 cri.go:89] found id: ""
	I0717 18:43:21.705907   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.705918   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:21.705926   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:21.705988   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:21.753302   80857 cri.go:89] found id: ""
	I0717 18:43:21.753323   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.753330   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:21.753337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:21.753388   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:21.785363   80857 cri.go:89] found id: ""
	I0717 18:43:21.785390   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.785396   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:21.785402   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:21.785448   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:21.817517   80857 cri.go:89] found id: ""
	I0717 18:43:21.817545   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.817553   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:21.817560   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:21.817615   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:21.849451   80857 cri.go:89] found id: ""
	I0717 18:43:21.849478   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.849489   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:21.849497   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:21.849553   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:21.880032   80857 cri.go:89] found id: ""
	I0717 18:43:21.880055   80857 logs.go:276] 0 containers: []
	W0717 18:43:21.880063   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:21.880073   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:21.880086   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:21.928498   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:21.928530   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:21.941532   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:21.941565   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:22.014044   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:22.014066   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:22.014081   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:22.090789   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:22.090817   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:24.628401   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:24.643571   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:24.643642   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:24.679262   80857 cri.go:89] found id: ""
	I0717 18:43:24.679288   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.679297   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:24.679303   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:24.679360   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:24.713043   80857 cri.go:89] found id: ""
	I0717 18:43:24.713073   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.713085   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:24.713092   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:24.713145   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:24.751459   80857 cri.go:89] found id: ""
	I0717 18:43:24.751496   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.751508   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:24.751518   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:24.751584   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:24.790793   80857 cri.go:89] found id: ""
	I0717 18:43:24.790820   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.790831   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:24.790838   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:24.790895   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:24.822909   80857 cri.go:89] found id: ""
	I0717 18:43:24.822936   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.822945   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:24.822953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:24.823016   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:24.855369   80857 cri.go:89] found id: ""
	I0717 18:43:24.855418   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.855455   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:24.855468   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:24.855557   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:24.891080   80857 cri.go:89] found id: ""
	I0717 18:43:24.891110   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.891127   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:24.891133   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:24.891187   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:24.923679   80857 cri.go:89] found id: ""
	I0717 18:43:24.923812   80857 logs.go:276] 0 containers: []
	W0717 18:43:24.923833   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:24.923847   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:24.923863   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:24.975469   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:24.975499   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:24.988671   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:24.988702   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 18:43:23.670616   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.171013   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.323858   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.324395   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:28.325125   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:24.596495   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:26.597134   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:29.096334   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	W0717 18:43:25.055191   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:25.055210   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:25.055223   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:25.138867   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:25.138900   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:27.678822   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:27.691422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:27.691483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:27.723979   80857 cri.go:89] found id: ""
	I0717 18:43:27.724008   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.724016   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:27.724022   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:27.724067   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:27.756389   80857 cri.go:89] found id: ""
	I0717 18:43:27.756415   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.756423   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:27.756429   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:27.756476   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:27.787617   80857 cri.go:89] found id: ""
	I0717 18:43:27.787644   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.787652   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:27.787658   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:27.787705   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:27.821688   80857 cri.go:89] found id: ""
	I0717 18:43:27.821716   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.821725   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:27.821732   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:27.821787   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:27.855353   80857 cri.go:89] found id: ""
	I0717 18:43:27.855378   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.855386   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:27.855392   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:27.855439   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:27.887885   80857 cri.go:89] found id: ""
	I0717 18:43:27.887909   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.887917   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:27.887923   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:27.887984   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:27.918797   80857 cri.go:89] found id: ""
	I0717 18:43:27.918820   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.918828   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:27.918833   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:27.918884   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:27.951255   80857 cri.go:89] found id: ""
	I0717 18:43:27.951283   80857 logs.go:276] 0 containers: []
	W0717 18:43:27.951295   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:27.951306   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:27.951319   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:28.025476   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:28.025506   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:28.063994   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:28.064020   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:28.117762   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:28.117805   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:28.135688   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:28.135725   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:28.238770   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:28.172438   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.670703   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:32.674896   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.824443   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.324216   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:31.595533   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:33.597968   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:30.739930   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:30.754147   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:30.754231   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:30.794454   80857 cri.go:89] found id: ""
	I0717 18:43:30.794479   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.794486   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:30.794491   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:30.794548   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:30.831643   80857 cri.go:89] found id: ""
	I0717 18:43:30.831666   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.831673   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:30.831678   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:30.831731   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:30.863293   80857 cri.go:89] found id: ""
	I0717 18:43:30.863315   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.863323   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:30.863337   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:30.863395   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:30.897830   80857 cri.go:89] found id: ""
	I0717 18:43:30.897859   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.897870   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:30.897877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:30.897929   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:30.933179   80857 cri.go:89] found id: ""
	I0717 18:43:30.933209   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.933220   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:30.933227   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:30.933289   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:30.964730   80857 cri.go:89] found id: ""
	I0717 18:43:30.964759   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.964773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:30.964781   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:30.964825   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:30.996330   80857 cri.go:89] found id: ""
	I0717 18:43:30.996353   80857 logs.go:276] 0 containers: []
	W0717 18:43:30.996361   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:30.996367   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:30.996419   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:31.028193   80857 cri.go:89] found id: ""
	I0717 18:43:31.028220   80857 logs.go:276] 0 containers: []
	W0717 18:43:31.028228   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:31.028237   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:31.028251   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:31.040465   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:31.040490   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:31.108127   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:31.108150   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:31.108164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:31.187763   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:31.187797   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:31.224238   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:31.224266   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:33.776145   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:33.790045   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:33.790108   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:33.823471   80857 cri.go:89] found id: ""
	I0717 18:43:33.823495   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.823505   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:33.823512   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:33.823568   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:33.860205   80857 cri.go:89] found id: ""
	I0717 18:43:33.860233   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.860243   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:33.860250   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:33.860298   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:33.895469   80857 cri.go:89] found id: ""
	I0717 18:43:33.895499   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.895509   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:33.895516   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:33.895578   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:33.938483   80857 cri.go:89] found id: ""
	I0717 18:43:33.938517   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.938527   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:33.938534   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:33.938596   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:33.973265   80857 cri.go:89] found id: ""
	I0717 18:43:33.973293   80857 logs.go:276] 0 containers: []
	W0717 18:43:33.973303   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:33.973309   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:33.973382   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:34.012669   80857 cri.go:89] found id: ""
	I0717 18:43:34.012696   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.012704   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:34.012710   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:34.012760   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:34.045522   80857 cri.go:89] found id: ""
	I0717 18:43:34.045547   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.045557   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:34.045564   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:34.045636   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:34.082927   80857 cri.go:89] found id: ""
	I0717 18:43:34.082957   80857 logs.go:276] 0 containers: []
	W0717 18:43:34.082968   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:34.082979   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:34.082993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:34.134133   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:34.134168   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:34.146814   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:34.146837   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:34.217050   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:34.217079   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:34.217094   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:34.298572   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:34.298610   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:35.169868   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.170083   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:35.324578   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:37.825006   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:36.096437   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:38.096991   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:36.838187   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:36.850888   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:36.850948   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:36.883132   80857 cri.go:89] found id: ""
	I0717 18:43:36.883153   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.883160   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:36.883166   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:36.883209   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:36.918310   80857 cri.go:89] found id: ""
	I0717 18:43:36.918339   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.918348   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:36.918353   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:36.918411   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:36.949794   80857 cri.go:89] found id: ""
	I0717 18:43:36.949818   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.949825   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:36.949831   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:36.949889   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:36.980913   80857 cri.go:89] found id: ""
	I0717 18:43:36.980951   80857 logs.go:276] 0 containers: []
	W0717 18:43:36.980962   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:36.980969   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:36.981029   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:37.014295   80857 cri.go:89] found id: ""
	I0717 18:43:37.014322   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.014330   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:37.014336   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:37.014397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:37.048555   80857 cri.go:89] found id: ""
	I0717 18:43:37.048581   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.048589   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:37.048595   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:37.048643   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:37.080533   80857 cri.go:89] found id: ""
	I0717 18:43:37.080561   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.080571   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:37.080577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:37.080640   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:37.112919   80857 cri.go:89] found id: ""
	I0717 18:43:37.112952   80857 logs.go:276] 0 containers: []
	W0717 18:43:37.112963   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:37.112973   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:37.112987   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:37.165012   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:37.165044   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:37.177860   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:37.177881   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:37.244776   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:37.244806   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:37.244824   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:37.322949   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:37.322976   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:39.861056   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:39.884509   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:39.884592   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:39.931317   80857 cri.go:89] found id: ""
	I0717 18:43:39.931341   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.931348   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:39.931354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:39.931410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:39.971571   80857 cri.go:89] found id: ""
	I0717 18:43:39.971615   80857 logs.go:276] 0 containers: []
	W0717 18:43:39.971626   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:39.971634   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:39.971692   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:40.003851   80857 cri.go:89] found id: ""
	I0717 18:43:40.003875   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.003883   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:40.003891   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:40.003942   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:40.040403   80857 cri.go:89] found id: ""
	I0717 18:43:40.040430   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.040440   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:40.040445   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:40.040498   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:39.669960   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.170056   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.325792   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.824332   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.596935   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:42.597153   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:40.071893   80857 cri.go:89] found id: ""
	I0717 18:43:40.071919   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.071927   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:40.071932   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:40.071979   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:40.111020   80857 cri.go:89] found id: ""
	I0717 18:43:40.111042   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.111052   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:40.111059   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:40.111117   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:40.142872   80857 cri.go:89] found id: ""
	I0717 18:43:40.142899   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.142910   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:40.142917   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:40.142975   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:40.179919   80857 cri.go:89] found id: ""
	I0717 18:43:40.179944   80857 logs.go:276] 0 containers: []
	W0717 18:43:40.179953   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:40.179963   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:40.179980   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:40.233033   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:40.233075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:40.246272   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:40.246299   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:40.311988   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:40.312014   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:40.312033   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:40.395622   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:40.395658   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:42.935843   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:42.949893   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:42.949957   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:42.982429   80857 cri.go:89] found id: ""
	I0717 18:43:42.982451   80857 logs.go:276] 0 containers: []
	W0717 18:43:42.982459   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:42.982464   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:42.982512   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:43.018637   80857 cri.go:89] found id: ""
	I0717 18:43:43.018659   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.018666   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:43.018672   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:43.018719   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:43.054274   80857 cri.go:89] found id: ""
	I0717 18:43:43.054301   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.054310   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:43.054317   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:43.054368   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:43.093382   80857 cri.go:89] found id: ""
	I0717 18:43:43.093408   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.093418   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:43.093425   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:43.093484   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:43.125830   80857 cri.go:89] found id: ""
	I0717 18:43:43.125862   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.125871   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:43.125878   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:43.125936   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:43.157110   80857 cri.go:89] found id: ""
	I0717 18:43:43.157138   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.157147   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:43.157154   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:43.157215   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:43.188320   80857 cri.go:89] found id: ""
	I0717 18:43:43.188342   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.188349   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:43.188354   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:43.188400   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:43.220650   80857 cri.go:89] found id: ""
	I0717 18:43:43.220679   80857 logs.go:276] 0 containers: []
	W0717 18:43:43.220686   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:43.220695   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:43.220707   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:43.259320   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:43.259358   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:43.308308   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:43.308346   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:43.321865   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:43.321894   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:43.396110   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:43.396135   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:43.396147   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:44.670206   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.169748   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.323427   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.324066   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.096564   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:47.105605   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:45.976091   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:45.988956   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:45.989015   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:46.022277   80857 cri.go:89] found id: ""
	I0717 18:43:46.022307   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.022318   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:46.022325   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:46.022398   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:46.057607   80857 cri.go:89] found id: ""
	I0717 18:43:46.057636   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.057646   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:46.057653   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:46.057712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:46.089275   80857 cri.go:89] found id: ""
	I0717 18:43:46.089304   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.089313   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:46.089321   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:46.089378   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:46.123686   80857 cri.go:89] found id: ""
	I0717 18:43:46.123717   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.123726   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:46.123731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:46.123784   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:46.166600   80857 cri.go:89] found id: ""
	I0717 18:43:46.166628   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.166638   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:46.166645   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:46.166704   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:46.202518   80857 cri.go:89] found id: ""
	I0717 18:43:46.202543   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.202562   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:46.202568   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:46.202612   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:46.234573   80857 cri.go:89] found id: ""
	I0717 18:43:46.234608   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.234620   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:46.234627   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:46.234687   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:46.265305   80857 cri.go:89] found id: ""
	I0717 18:43:46.265333   80857 logs.go:276] 0 containers: []
	W0717 18:43:46.265343   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:46.265355   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:46.265369   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:46.342963   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:46.342993   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:46.377170   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:46.377208   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:46.429641   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:46.429673   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:46.442168   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:46.442195   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:46.516656   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
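	(Illustrative aside, not part of the captured log: the cycles above keep rerunning the same `sudo crictl ps -a --quiet --name=<component>` query for each control-plane component and report "No container was found matching ..." while the apiserver on localhost:8443 is unreachable. The snippet below is a minimal, hypothetical Go sketch of that probe loop, assuming only the crictl command shown in the log; it is not minikube's own implementation.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Component names probed in the log output above, in the same order.
		names := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range names {
			// Same query the log shows: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl query for %q failed: %v\n", name, err)
				continue
			}
			if strings.TrimSpace(string(out)) == "" {
				// Empty output is what the log reports as: No container was found matching "<component>"
				fmt.Printf("No container was found matching %q\n", name)
			} else {
				fmt.Printf("found id: %s\n", strings.TrimSpace(string(out)))
			}
		}
	}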
	I0717 18:43:49.016877   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:49.030308   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:49.030375   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:49.062400   80857 cri.go:89] found id: ""
	I0717 18:43:49.062423   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.062430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:49.062435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:49.062486   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:49.097110   80857 cri.go:89] found id: ""
	I0717 18:43:49.097131   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.097137   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:49.097142   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:49.097190   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:49.128535   80857 cri.go:89] found id: ""
	I0717 18:43:49.128558   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.128571   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:49.128577   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:49.128626   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:49.162505   80857 cri.go:89] found id: ""
	I0717 18:43:49.162530   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.162538   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:49.162544   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:49.162594   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:49.194912   80857 cri.go:89] found id: ""
	I0717 18:43:49.194939   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.194950   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:49.194957   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:49.195025   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:49.227055   80857 cri.go:89] found id: ""
	I0717 18:43:49.227083   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.227092   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:49.227098   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:49.227147   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:49.259568   80857 cri.go:89] found id: ""
	I0717 18:43:49.259596   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.259607   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:49.259618   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:49.259673   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:49.291700   80857 cri.go:89] found id: ""
	I0717 18:43:49.291727   80857 logs.go:276] 0 containers: []
	W0717 18:43:49.291735   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:49.291744   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:49.291755   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:49.344600   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:49.344636   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:49.357680   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:49.357705   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:49.427160   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:49.427180   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:49.427192   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:49.504151   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:49.504182   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:49.170632   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.170953   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.324205   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.823181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:53.824989   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:49.596298   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:51.596383   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:54.097260   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:52.041591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:52.054775   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:52.054841   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:52.085858   80857 cri.go:89] found id: ""
	I0717 18:43:52.085892   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.085904   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:52.085911   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:52.085961   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:52.124100   80857 cri.go:89] found id: ""
	I0717 18:43:52.124122   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.124130   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:52.124135   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:52.124195   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:52.155056   80857 cri.go:89] found id: ""
	I0717 18:43:52.155079   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.155087   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:52.155093   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:52.155154   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:52.189318   80857 cri.go:89] found id: ""
	I0717 18:43:52.189349   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.189359   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:52.189366   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:52.189430   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:52.222960   80857 cri.go:89] found id: ""
	I0717 18:43:52.222988   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.222999   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:52.223006   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:52.223071   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:52.255807   80857 cri.go:89] found id: ""
	I0717 18:43:52.255834   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.255841   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:52.255847   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:52.255904   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:52.286596   80857 cri.go:89] found id: ""
	I0717 18:43:52.286628   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.286641   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:52.286648   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:52.286703   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:52.319607   80857 cri.go:89] found id: ""
	I0717 18:43:52.319632   80857 logs.go:276] 0 containers: []
	W0717 18:43:52.319641   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:52.319652   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:52.319666   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:52.371270   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:52.371301   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:52.384771   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:52.384803   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:52.456408   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:52.456432   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:52.456444   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:52.533724   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:52.533759   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:53.171080   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.669642   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.324311   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.823693   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:56.595916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:58.597526   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:43:55.072554   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:55.087005   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:55.087086   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:55.123300   80857 cri.go:89] found id: ""
	I0717 18:43:55.123325   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.123331   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:55.123336   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:55.123390   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:55.158476   80857 cri.go:89] found id: ""
	I0717 18:43:55.158502   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.158509   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:55.158515   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:55.158572   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:55.198489   80857 cri.go:89] found id: ""
	I0717 18:43:55.198511   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.198518   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:55.198524   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:55.198567   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:55.230901   80857 cri.go:89] found id: ""
	I0717 18:43:55.230933   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.230943   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:55.230951   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:55.231028   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:55.262303   80857 cri.go:89] found id: ""
	I0717 18:43:55.262326   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.262333   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:55.262340   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:55.262393   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:55.293889   80857 cri.go:89] found id: ""
	I0717 18:43:55.293916   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.293925   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:55.293930   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:55.293983   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:55.325695   80857 cri.go:89] found id: ""
	I0717 18:43:55.325720   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.325727   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:55.325737   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:55.325797   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:55.360021   80857 cri.go:89] found id: ""
	I0717 18:43:55.360044   80857 logs.go:276] 0 containers: []
	W0717 18:43:55.360052   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:55.360059   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:55.360075   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:55.372088   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:55.372111   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:55.442073   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:55.442101   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:55.442116   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:55.521733   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:55.521763   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:55.558914   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:55.558947   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.114001   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:43:58.126283   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:43:58.126353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:43:58.162769   80857 cri.go:89] found id: ""
	I0717 18:43:58.162800   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.162810   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:43:58.162815   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:43:58.162862   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:43:58.197359   80857 cri.go:89] found id: ""
	I0717 18:43:58.197386   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.197397   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:43:58.197404   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:43:58.197465   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:43:58.229662   80857 cri.go:89] found id: ""
	I0717 18:43:58.229691   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.229700   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:43:58.229707   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:43:58.229766   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:43:58.261810   80857 cri.go:89] found id: ""
	I0717 18:43:58.261832   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.261838   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:43:58.261844   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:43:58.261900   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:43:58.293243   80857 cri.go:89] found id: ""
	I0717 18:43:58.293271   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.293282   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:43:58.293290   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:43:58.293353   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:43:58.325689   80857 cri.go:89] found id: ""
	I0717 18:43:58.325714   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.325724   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:43:58.325731   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:43:58.325785   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:43:58.357381   80857 cri.go:89] found id: ""
	I0717 18:43:58.357406   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.357416   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:43:58.357422   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:43:58.357483   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:43:58.389859   80857 cri.go:89] found id: ""
	I0717 18:43:58.389888   80857 logs.go:276] 0 containers: []
	W0717 18:43:58.389900   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:43:58.389910   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:43:58.389926   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:43:58.458034   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:43:58.458058   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:43:58.458072   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:43:58.536134   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:43:58.536164   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:43:58.573808   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:43:58.573834   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:43:58.624956   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:43:58.624985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:43:58.170810   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.670184   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.671370   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:00.824682   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:02.824874   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:01.096294   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:03.096348   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
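	The interleaved pod_ready.go lines come from three other test processes (80180, 80401, 81068) that keep polling metrics-server pods in the kube-system namespace; the pods never report Ready, which is consistent with the metrics-server related failures listed for this run. A minimal sketch for inspecting one of these pods, assuming kubectl access to the affected profile; the context name is a placeholder and the pod name is copied from the log and will differ per run:

	    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-8md44 -o wide
	    kubectl --context <profile> -n kube-system describe pod metrics-server-569cc877fc-8md44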
	I0717 18:44:01.138486   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:01.151547   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:01.151610   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:01.186397   80857 cri.go:89] found id: ""
	I0717 18:44:01.186422   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.186430   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:01.186435   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:01.186487   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:01.220797   80857 cri.go:89] found id: ""
	I0717 18:44:01.220822   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.220830   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:01.220849   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:01.220894   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:01.257640   80857 cri.go:89] found id: ""
	I0717 18:44:01.257666   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.257674   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:01.257680   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:01.257727   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:01.295393   80857 cri.go:89] found id: ""
	I0717 18:44:01.295418   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.295425   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:01.295432   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:01.295493   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:01.327242   80857 cri.go:89] found id: ""
	I0717 18:44:01.327261   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.327268   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:01.327273   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:01.327319   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:01.358559   80857 cri.go:89] found id: ""
	I0717 18:44:01.358586   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.358593   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:01.358599   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:01.358647   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:01.392301   80857 cri.go:89] found id: ""
	I0717 18:44:01.392332   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.392341   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:01.392346   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:01.392407   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:01.424422   80857 cri.go:89] found id: ""
	I0717 18:44:01.424449   80857 logs.go:276] 0 containers: []
	W0717 18:44:01.424457   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:01.424465   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:01.424477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:01.473298   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:01.473332   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:01.487444   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:01.487471   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:01.552548   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:01.552572   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:01.552586   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:01.634203   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:01.634242   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:04.175618   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:04.188071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:04.188150   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:04.222149   80857 cri.go:89] found id: ""
	I0717 18:44:04.222173   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.222180   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:04.222185   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:04.222242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:04.257174   80857 cri.go:89] found id: ""
	I0717 18:44:04.257211   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.257223   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:04.257232   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:04.257284   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:04.291628   80857 cri.go:89] found id: ""
	I0717 18:44:04.291653   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.291666   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:04.291673   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:04.291733   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:04.325935   80857 cri.go:89] found id: ""
	I0717 18:44:04.325964   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.325975   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:04.325982   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:04.326043   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:04.356610   80857 cri.go:89] found id: ""
	I0717 18:44:04.356638   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.356648   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:04.356655   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:04.356712   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:04.387728   80857 cri.go:89] found id: ""
	I0717 18:44:04.387764   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.387773   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:04.387782   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:04.387840   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:04.421452   80857 cri.go:89] found id: ""
	I0717 18:44:04.421479   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.421488   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:04.421495   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:04.421555   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:04.453111   80857 cri.go:89] found id: ""
	I0717 18:44:04.453139   80857 logs.go:276] 0 containers: []
	W0717 18:44:04.453150   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:04.453161   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:04.453175   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:04.506185   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:04.506215   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:04.523611   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:04.523638   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:04.591051   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:04.591074   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:04.591091   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:04.666603   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:04.666647   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:05.169836   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.170112   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.324886   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.325488   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:05.096545   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.598131   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:07.205208   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:07.218182   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:07.218236   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:07.254521   80857 cri.go:89] found id: ""
	I0717 18:44:07.254554   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.254565   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:07.254571   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:07.254638   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:07.293622   80857 cri.go:89] found id: ""
	I0717 18:44:07.293650   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.293658   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:07.293663   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:07.293711   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:07.331056   80857 cri.go:89] found id: ""
	I0717 18:44:07.331083   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.331091   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:07.331097   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:07.331157   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:07.368445   80857 cri.go:89] found id: ""
	I0717 18:44:07.368476   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.368484   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:07.368491   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:07.368541   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:07.405507   80857 cri.go:89] found id: ""
	I0717 18:44:07.405539   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.405550   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:07.405557   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:07.405617   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:07.444752   80857 cri.go:89] found id: ""
	I0717 18:44:07.444782   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.444792   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:07.444801   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:07.444859   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:07.486976   80857 cri.go:89] found id: ""
	I0717 18:44:07.487006   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.487016   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:07.487024   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:07.487073   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:07.522561   80857 cri.go:89] found id: ""
	I0717 18:44:07.522590   80857 logs.go:276] 0 containers: []
	W0717 18:44:07.522599   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:07.522607   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:07.522618   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:07.576350   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:07.576382   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:07.591491   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:07.591517   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:07.659860   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:07.659886   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:07.659902   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:07.743445   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:07.743478   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:09.170601   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.170851   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:09.824120   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:11.826838   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.097009   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:12.596778   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:10.284468   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:10.296549   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:10.296608   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:10.331209   80857 cri.go:89] found id: ""
	I0717 18:44:10.331236   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.331246   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:10.331252   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:10.331297   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:10.363911   80857 cri.go:89] found id: ""
	I0717 18:44:10.363941   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.363949   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:10.363954   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:10.364001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:10.395935   80857 cri.go:89] found id: ""
	I0717 18:44:10.395960   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.395970   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:10.395977   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:10.396021   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:10.428307   80857 cri.go:89] found id: ""
	I0717 18:44:10.428337   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.428344   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:10.428351   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:10.428397   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:10.459615   80857 cri.go:89] found id: ""
	I0717 18:44:10.459643   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.459654   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:10.459661   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:10.459715   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:10.491593   80857 cri.go:89] found id: ""
	I0717 18:44:10.491617   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.491628   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:10.491636   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:10.491693   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:10.526822   80857 cri.go:89] found id: ""
	I0717 18:44:10.526846   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.526853   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:10.526858   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:10.526918   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:10.561037   80857 cri.go:89] found id: ""
	I0717 18:44:10.561066   80857 logs.go:276] 0 containers: []
	W0717 18:44:10.561077   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:10.561087   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:10.561101   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:10.643333   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:10.643364   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:10.684673   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:10.684704   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:10.736191   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:10.736220   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:10.748762   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:10.748793   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:10.812121   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
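	Every describe-nodes attempt above fails with connection refused on localhost:8443, which matches the earlier finding that no kube-apiserver container exists: nothing is serving the API port. A minimal sketch to confirm this from inside the node, assuming the ss utility is available in the guest image; the kubectl invocation is copied verbatim from the log:

	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig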
	I0717 18:44:13.313033   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:13.325692   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:13.325756   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:13.358306   80857 cri.go:89] found id: ""
	I0717 18:44:13.358336   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.358345   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:13.358352   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:13.358410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:13.393233   80857 cri.go:89] found id: ""
	I0717 18:44:13.393264   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.393274   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:13.393282   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:13.393340   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:13.424256   80857 cri.go:89] found id: ""
	I0717 18:44:13.424287   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.424298   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:13.424305   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:13.424358   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:13.454988   80857 cri.go:89] found id: ""
	I0717 18:44:13.455010   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.455018   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:13.455023   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:13.455069   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:13.491019   80857 cri.go:89] found id: ""
	I0717 18:44:13.491046   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.491054   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:13.491060   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:13.491107   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:13.523045   80857 cri.go:89] found id: ""
	I0717 18:44:13.523070   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.523079   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:13.523085   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:13.523131   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:13.555442   80857 cri.go:89] found id: ""
	I0717 18:44:13.555470   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.555483   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:13.555489   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:13.555549   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:13.588891   80857 cri.go:89] found id: ""
	I0717 18:44:13.588921   80857 logs.go:276] 0 containers: []
	W0717 18:44:13.588931   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:13.588958   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:13.588973   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:13.663635   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:13.663659   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:13.663674   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:13.749098   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:13.749135   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:13.785489   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:13.785524   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:13.837098   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:13.837128   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:13.671215   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.671282   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.671466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:14.324573   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.826063   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:15.095967   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:17.096403   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.096478   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:16.350571   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:16.364398   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:16.364470   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:16.400677   80857 cri.go:89] found id: ""
	I0717 18:44:16.400708   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.400719   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:16.400726   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:16.400781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:16.431715   80857 cri.go:89] found id: ""
	I0717 18:44:16.431743   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.431754   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:16.431760   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:16.431836   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:16.465115   80857 cri.go:89] found id: ""
	I0717 18:44:16.465148   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.465160   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:16.465167   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:16.465230   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:16.497906   80857 cri.go:89] found id: ""
	I0717 18:44:16.497933   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.497944   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:16.497952   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:16.498008   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:16.534066   80857 cri.go:89] found id: ""
	I0717 18:44:16.534097   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.534108   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:16.534116   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:16.534173   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:16.566679   80857 cri.go:89] found id: ""
	I0717 18:44:16.566706   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.566717   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:16.566724   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:16.566781   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:16.598397   80857 cri.go:89] found id: ""
	I0717 18:44:16.598416   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.598422   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:16.598427   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:16.598480   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:16.629943   80857 cri.go:89] found id: ""
	I0717 18:44:16.629975   80857 logs.go:276] 0 containers: []
	W0717 18:44:16.629998   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:16.630017   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:16.630032   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:16.706452   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:16.706489   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:16.744971   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:16.745003   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:16.796450   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:16.796477   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:16.809192   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:16.809217   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:16.875699   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.376821   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:19.389921   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:19.389980   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:19.423837   80857 cri.go:89] found id: ""
	I0717 18:44:19.423862   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.423870   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:19.423877   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:19.423934   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:19.468267   80857 cri.go:89] found id: ""
	I0717 18:44:19.468293   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.468305   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:19.468311   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:19.468371   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:19.503286   80857 cri.go:89] found id: ""
	I0717 18:44:19.503315   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.503326   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:19.503333   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:19.503391   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:19.535505   80857 cri.go:89] found id: ""
	I0717 18:44:19.535531   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.535542   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:19.535548   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:19.535607   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:19.568678   80857 cri.go:89] found id: ""
	I0717 18:44:19.568704   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.568711   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:19.568717   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:19.568762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:19.604027   80857 cri.go:89] found id: ""
	I0717 18:44:19.604053   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.604064   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:19.604071   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:19.604127   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:19.637357   80857 cri.go:89] found id: ""
	I0717 18:44:19.637387   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.637397   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:19.637403   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:19.637450   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:19.669094   80857 cri.go:89] found id: ""
	I0717 18:44:19.669126   80857 logs.go:276] 0 containers: []
	W0717 18:44:19.669136   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:19.669145   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:19.669160   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:19.720218   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:19.720248   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:19.733320   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:19.733343   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:19.796229   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:19.796252   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:19.796267   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:19.871157   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:19.871186   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:20.170824   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.670239   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:19.324037   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.324408   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.824030   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:21.098734   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:23.595859   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:22.409012   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:22.421477   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:22.421546   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:22.457314   80857 cri.go:89] found id: ""
	I0717 18:44:22.457337   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.457346   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:22.457354   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:22.457410   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:22.490998   80857 cri.go:89] found id: ""
	I0717 18:44:22.491022   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.491030   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:22.491037   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:22.491090   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:22.523904   80857 cri.go:89] found id: ""
	I0717 18:44:22.523934   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.523945   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:22.523953   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:22.524012   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:22.555917   80857 cri.go:89] found id: ""
	I0717 18:44:22.555947   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.555956   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:22.555962   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:22.556026   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:22.588510   80857 cri.go:89] found id: ""
	I0717 18:44:22.588552   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.588565   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:22.588574   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:22.588652   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:22.621854   80857 cri.go:89] found id: ""
	I0717 18:44:22.621883   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.621893   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:22.621901   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:22.621956   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:22.653897   80857 cri.go:89] found id: ""
	I0717 18:44:22.653921   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.653931   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:22.653938   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:22.654001   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:22.685731   80857 cri.go:89] found id: ""
	I0717 18:44:22.685760   80857 logs.go:276] 0 containers: []
	W0717 18:44:22.685770   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:22.685779   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:22.685792   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:22.735514   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:22.735545   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:22.748148   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:22.748169   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:22.809637   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:22.809666   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:22.809682   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:22.886014   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:22.886050   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
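
The log-gathering cycle above probes each control-plane component with "sudo crictl ps -a --quiet --name=<component>" and finds no containers, which is why every pass falls back to kubelet, dmesg and CRI-O output. A minimal local sketch of that probe, assuming crictl is installed and sudo is available (illustrative only, not minikube's actual cri.go code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors the "sudo crictl ps -a --quiet --name=<component>" calls
    // in the log: it returns the container IDs crictl reports for a name filter.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
    		ids, err := listContainers(c)
    		fmt.Printf("%-22s %d container(s) %v err=%v\n", c, len(ids), ids, err)
    	}
    }

An empty result for every component, as seen here, means the control plane never came back up after the restart attempt.
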
	I0717 18:44:24.670825   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:27.169930   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.824694   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.324620   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:25.597423   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:28.095788   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
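
The interleaved pod_ready.go lines come from separate test profiles (process IDs 80180, 80401 and 81068), each polling whether its metrics-server pod has reached the Ready condition. A rough client-go equivalent of that wait, with the kubeconfig path and pod name taken from the log and the poll interval and timeout assumed:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True, i.e. the condition
    // the pod_ready.go lines above keep reporting as "False".
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Kubeconfig path and pod name are the ones shown in the log; adjust as needed.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-569cc877fc-8md44", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API errors: keep polling
    			}
    			return podReady(pod), nil
    		})
    	fmt.Println("pod ready:", err == nil)
    }

When the 4m0s budget runs out, the test logs the "will not retry!" error seen further down and moves on to resetting the cluster.
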
	I0717 18:44:25.431906   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:25.444866   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:25.444965   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:25.477211   80857 cri.go:89] found id: ""
	I0717 18:44:25.477245   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.477257   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:25.477264   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:25.477366   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:25.512077   80857 cri.go:89] found id: ""
	I0717 18:44:25.512108   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.512120   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:25.512127   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:25.512177   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:25.543953   80857 cri.go:89] found id: ""
	I0717 18:44:25.543974   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.543981   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:25.543987   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:25.544032   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:25.574955   80857 cri.go:89] found id: ""
	I0717 18:44:25.574980   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.574990   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:25.574997   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:25.575054   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:25.607078   80857 cri.go:89] found id: ""
	I0717 18:44:25.607106   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.607117   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:25.607125   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:25.607188   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:25.643129   80857 cri.go:89] found id: ""
	I0717 18:44:25.643152   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.643162   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:25.643169   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:25.643225   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:25.678220   80857 cri.go:89] found id: ""
	I0717 18:44:25.678241   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.678249   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:25.678254   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:25.678309   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:25.715405   80857 cri.go:89] found id: ""
	I0717 18:44:25.715433   80857 logs.go:276] 0 containers: []
	W0717 18:44:25.715446   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:25.715458   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:25.715474   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:25.772978   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:25.773008   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:25.786559   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:25.786587   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:25.853369   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:25.853386   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:25.853398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:25.954346   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:25.954398   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:28.498591   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:28.511701   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:44:28.511762   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:44:28.543527   80857 cri.go:89] found id: ""
	I0717 18:44:28.543551   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.543559   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:44:28.543565   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:44:28.543624   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:44:28.574737   80857 cri.go:89] found id: ""
	I0717 18:44:28.574762   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.574769   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:44:28.574776   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:44:28.574835   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:44:28.608129   80857 cri.go:89] found id: ""
	I0717 18:44:28.608166   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.608174   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:44:28.608179   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:44:28.608234   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:44:28.644324   80857 cri.go:89] found id: ""
	I0717 18:44:28.644348   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.644357   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:44:28.644371   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:44:28.644426   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:44:28.675830   80857 cri.go:89] found id: ""
	I0717 18:44:28.675859   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.675870   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:44:28.675877   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:44:28.675937   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:44:28.705713   80857 cri.go:89] found id: ""
	I0717 18:44:28.705749   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.705760   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:44:28.705768   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:44:28.705821   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:44:28.738648   80857 cri.go:89] found id: ""
	I0717 18:44:28.738677   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.738688   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:44:28.738695   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:44:28.738752   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:44:28.768877   80857 cri.go:89] found id: ""
	I0717 18:44:28.768906   80857 logs.go:276] 0 containers: []
	W0717 18:44:28.768916   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:44:28.768927   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:44:28.768953   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:44:28.818951   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:44:28.818985   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:44:28.832813   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:44:28.832843   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:44:28.910030   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:44:28.910051   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:44:28.910063   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:44:28.986706   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:44:28.986743   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 18:44:29.170559   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.669543   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.824906   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:33.324261   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:30.096916   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:32.597522   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:31.529154   80857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:44:31.543261   80857 kubeadm.go:597] duration metric: took 4m4.346231712s to restartPrimaryControlPlane
	W0717 18:44:31.543327   80857 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:31.543350   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:44:33.670602   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.169669   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.325082   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.824371   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:35.096445   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:37.097375   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:39.098005   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:36.752008   80857 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.208633612s)
	I0717 18:44:36.752076   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:44:36.765411   80857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:44:36.774556   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:44:36.783406   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:44:36.783427   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:44:36.783479   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:44:36.791953   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:44:36.792007   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:44:36.800929   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:44:36.808988   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:44:36.809049   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:44:36.817312   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.825586   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:44:36.825648   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:44:36.834783   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:44:36.843109   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:44:36.843166   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
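
Each of the four checks above greps a kubeconfig-style file for the expected control-plane endpoint and, when the file is missing or does not mention it, removes the file so the subsequent "kubeadm init" can regenerate it. A small sketch of that check (not minikube's own kubeadm.go code; paths and endpoint are the ones shown in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanupStaleKubeconfig drops a kubeconfig-style file that is missing or does not
    // reference the expected control-plane endpoint, so kubeadm init can rewrite it.
    func cleanupStaleKubeconfig(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err == nil && strings.Contains(string(data), endpoint) {
    		return nil // file exists and already points at the right endpoint
    	}
    	// Equivalent of the "sudo rm -f <path>" in the log: ignore a missing file.
    	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	return nil
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		fmt.Println(f, cleanupStaleKubeconfig("/etc/kubernetes/"+f, endpoint))
    	}
    }

Here all four files are already gone after the kubeadm reset, so every grep exits with status 2 and the removals are no-ops.
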
	I0717 18:44:36.852276   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:44:37.058251   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:44:38.170695   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.671193   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.324181   80401 pod_ready.go:102] pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:40.818959   80401 pod_ready.go:81] duration metric: took 4m0.000961975s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" ...
	E0717 18:44:40.818998   80401 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-mbtvd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:44:40.819017   80401 pod_ready.go:38] duration metric: took 4m12.045669741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:44:40.819042   80401 kubeadm.go:597] duration metric: took 4m22.276381575s to restartPrimaryControlPlane
	W0717 18:44:40.819091   80401 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:44:40.819116   80401 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:44:41.597013   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:44.097096   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:43.170145   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:45.670626   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:46.595570   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.598459   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:48.169822   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:50.170686   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:52.670255   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:51.097591   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:53.597467   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:55.170853   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:57.670157   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:56.096506   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:44:58.107493   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.170210   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.672286   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:00.596747   81068 pod_ready.go:102] pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:02.590517   81068 pod_ready.go:81] duration metric: took 4m0.000120095s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:02.590549   81068 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-j9qhx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:02.590572   81068 pod_ready.go:38] duration metric: took 4m10.536894511s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:02.590607   81068 kubeadm.go:597] duration metric: took 4m18.045314131s to restartPrimaryControlPlane
	W0717 18:45:02.590672   81068 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:02.590702   81068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:06.920900   80401 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.10175503s)
	I0717 18:45:06.921009   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:06.952090   80401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:06.962820   80401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:06.979545   80401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:06.979577   80401 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:06.979641   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:06.990493   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:06.990574   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:07.014934   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:07.024381   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:07.024449   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:07.033573   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.042495   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:07.042552   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:07.051233   80401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:07.059616   80401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:07.059674   80401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:07.068348   80401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:07.112042   80401 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0717 18:45:07.112188   80401 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:07.229262   80401 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:07.229356   80401 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:07.229491   80401 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 18:45:07.239251   80401 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:05.171753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.669753   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:07.241949   80401 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:07.242054   80401 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:07.242150   80401 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:07.242253   80401 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:07.242355   80401 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:07.242459   80401 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:07.242536   80401 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:07.242620   80401 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:07.242721   80401 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:07.242835   80401 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:07.242937   80401 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:07.242998   80401 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:07.243068   80401 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:07.641462   80401 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:07.705768   80401 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:07.821102   80401 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:07.898702   80401 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:08.107470   80401 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:08.107945   80401 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:08.111615   80401 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:08.113464   80401 out.go:204]   - Booting up control plane ...
	I0717 18:45:08.113572   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:08.113695   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:08.113843   80401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:08.131411   80401 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:08.137563   80401 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:08.137622   80401 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:08.268403   80401 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:08.268519   80401 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:08.769158   80401 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.386396ms
	I0717 18:45:08.769265   80401 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
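
The [api-check] phase blocks until the API server answers its health endpoint. kubeadm's real check goes through a Kubernetes client, so the raw HTTPS probe below (certificate verification disabled, /healthz path and the node address 192.168.72.216:8443 taken from the log) is only an approximation of the same wait:

    package main

    import (
    	"context"
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForAPIServer polls the /healthz endpoint until it answers 200 OK or the
    // context expires.
    func waitForAPIServer(ctx context.Context, base string) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-tick.C:
    			resp, err := client.Get(base + "/healthz")
    			if err != nil {
    				continue // API server not listening yet
    			}
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	fmt.Println(waitForAPIServer(ctx, "https://192.168.72.216:8443"))
    }

In this run the API server comes up well inside the 4m0s budget, as the next kubeadm line reports.
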
	I0717 18:45:09.669968   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:11.670466   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:13.771873   80401 kubeadm.go:310] [api-check] The API server is healthy after 5.002458706s
	I0717 18:45:13.789581   80401 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:13.804268   80401 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:13.831438   80401 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:13.831641   80401 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-066175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:13.845165   80401 kubeadm.go:310] [bootstrap-token] Using token: fscs12.0o2n9pl0vxdw75m1
	I0717 18:45:13.846851   80401 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:13.847002   80401 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:13.854788   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:13.866828   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:13.871541   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:13.875508   80401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:13.880068   80401 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:14.179824   80401 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:14.669946   80401 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:15.180053   80401 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:15.180076   80401 kubeadm.go:310] 
	I0717 18:45:15.180180   80401 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:15.180201   80401 kubeadm.go:310] 
	I0717 18:45:15.180287   80401 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:15.180295   80401 kubeadm.go:310] 
	I0717 18:45:15.180348   80401 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:15.180437   80401 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:15.180517   80401 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:15.180530   80401 kubeadm.go:310] 
	I0717 18:45:15.180607   80401 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:15.180617   80401 kubeadm.go:310] 
	I0717 18:45:15.180682   80401 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:15.180692   80401 kubeadm.go:310] 
	I0717 18:45:15.180775   80401 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:15.180871   80401 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:15.180984   80401 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:15.180996   80401 kubeadm.go:310] 
	I0717 18:45:15.181107   80401 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:15.181221   80401 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:15.181234   80401 kubeadm.go:310] 
	I0717 18:45:15.181370   80401 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181523   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:15.181571   80401 kubeadm.go:310] 	--control-plane 
	I0717 18:45:15.181579   80401 kubeadm.go:310] 
	I0717 18:45:15.181679   80401 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:15.181690   80401 kubeadm.go:310] 
	I0717 18:45:15.181802   80401 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fscs12.0o2n9pl0vxdw75m1 \
	I0717 18:45:15.181954   80401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:15.182460   80401 kubeadm.go:310] W0717 18:45:07.084606    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.182848   80401 kubeadm.go:310] W0717 18:45:07.085710    2905 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 18:45:15.183017   80401 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:15.183038   80401 cni.go:84] Creating CNI manager for ""
	I0717 18:45:15.183048   80401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:15.185022   80401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:13.671267   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.671682   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:15.186444   80401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:15.197514   80401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
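
The bridge CNI step above copies a small conflist to /etc/cni/net.d/1-k8s.conflist. The log records only its size (496 bytes), not its contents, so the JSON below is an illustrative guess at a typical bridge-plus-portmap configuration rather than the exact file minikube installs:

    package main

    import "os"

    // bridgeConflist is an assumed example of a bridge CNI configuration; the real
    // 1-k8s.conflist contents are not shown in the log.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }
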
	I0717 18:45:15.216000   80401 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:15.216097   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.216157   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-066175 minikube.k8s.io/updated_at=2024_07_17T18_45_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=no-preload-066175 minikube.k8s.io/primary=true
	I0717 18:45:15.251049   80401 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:15.383234   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:15.884265   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.384075   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:16.883375   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.383864   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:17.884072   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.383283   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:18.883644   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.384366   80401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:19.507413   80401 kubeadm.go:1113] duration metric: took 4.291369352s to wait for elevateKubeSystemPrivileges
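
The burst of "kubectl get sa default" invocations just above is a retry loop: the elevateKubeSystemPrivileges step is done once the default ServiceAccount exists in the freshly initialized cluster. A hedged sketch of the same wait, shelling out to the kubectl binary and kubeconfig paths shown in the log, with the half-second retry interval assumed:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA retries "kubectl get sa default" until the default
    // ServiceAccount exists, like the repeated calls in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig).Run()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("default service account not ready after %s: %w", timeout, err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	err := waitForDefaultSA(
    		"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		2*time.Minute,
    	)
    	fmt.Println(err)
    }

Here the account shows up after roughly 4.3 seconds of polling, and StartCluster proceeds to kubeconfig updates and addon setup.
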
	I0717 18:45:19.507450   80401 kubeadm.go:394] duration metric: took 5m1.019320853s to StartCluster
	I0717 18:45:19.507473   80401 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.507570   80401 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:19.510004   80401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:19.510329   80401 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.216 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:19.510401   80401 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:19.510484   80401 addons.go:69] Setting storage-provisioner=true in profile "no-preload-066175"
	I0717 18:45:19.510515   80401 addons.go:234] Setting addon storage-provisioner=true in "no-preload-066175"
	W0717 18:45:19.510523   80401 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:19.510530   80401 config.go:182] Loaded profile config "no-preload-066175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0717 18:45:19.510531   80401 addons.go:69] Setting default-storageclass=true in profile "no-preload-066175"
	I0717 18:45:19.510553   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510551   80401 addons.go:69] Setting metrics-server=true in profile "no-preload-066175"
	I0717 18:45:19.510572   80401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-066175"
	I0717 18:45:19.510586   80401 addons.go:234] Setting addon metrics-server=true in "no-preload-066175"
	W0717 18:45:19.510596   80401 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:19.510628   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.510986   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511027   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511047   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.511075   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.511102   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.512057   80401 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:19.513662   80401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:19.532038   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40719
	I0717 18:45:19.532059   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45825
	I0717 18:45:19.532048   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41755
	I0717 18:45:19.532557   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532562   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.532701   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.533086   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533107   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533246   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533261   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533276   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.533295   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.533455   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533671   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533732   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.533851   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.533933   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.533958   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.534280   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.534310   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.537749   80401 addons.go:234] Setting addon default-storageclass=true in "no-preload-066175"
	W0717 18:45:19.537773   80401 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:19.537804   80401 host.go:66] Checking if "no-preload-066175" exists ...
	I0717 18:45:19.538168   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.538206   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.550488   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I0717 18:45:19.551013   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.551625   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.551647   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.552005   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.552335   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.553613   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I0717 18:45:19.553633   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0717 18:45:19.554184   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554243   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.554271   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.554784   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554801   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.554965   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.554986   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.555220   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555350   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.555393   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.555995   80401 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:19.556103   80401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:19.556229   80401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:19.556825   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.557482   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:19.557499   80401 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:19.557517   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.558437   80401 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:19.560069   80401 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.560084   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:19.560100   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.560881   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.560908   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.560932   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.561265   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.561477   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.561633   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.561732   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.563601   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564025   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.564197   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.564219   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.564378   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.564549   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.564686   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.579324   80401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37271
	I0717 18:45:19.579786   80401 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:19.580331   80401 main.go:141] libmachine: Using API Version  1
	I0717 18:45:19.580354   80401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:19.580697   80401 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:19.580925   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetState
	I0717 18:45:19.582700   80401 main.go:141] libmachine: (no-preload-066175) Calling .DriverName
	I0717 18:45:19.582910   80401 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.582923   80401 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:19.582936   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHHostname
	I0717 18:45:19.585938   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586387   80401 main.go:141] libmachine: (no-preload-066175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:a5:17", ip: ""} in network mk-no-preload-066175: {Iface:virbr3 ExpiryTime:2024-07-17 19:39:55 +0000 UTC Type:0 Mac:52:54:00:72:a5:17 Iaid: IPaddr:192.168.72.216 Prefix:24 Hostname:no-preload-066175 Clientid:01:52:54:00:72:a5:17}
	I0717 18:45:19.586414   80401 main.go:141] libmachine: (no-preload-066175) DBG | domain no-preload-066175 has defined IP address 192.168.72.216 and MAC address 52:54:00:72:a5:17 in network mk-no-preload-066175
	I0717 18:45:19.586605   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHPort
	I0717 18:45:19.586758   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHKeyPath
	I0717 18:45:19.586920   80401 main.go:141] libmachine: (no-preload-066175) Calling .GetSSHUsername
	I0717 18:45:19.587061   80401 sshutil.go:53] new ssh client: &{IP:192.168.72.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/no-preload-066175/id_rsa Username:docker}
	I0717 18:45:19.706369   80401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:19.727936   80401 node_ready.go:35] waiting up to 6m0s for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738822   80401 node_ready.go:49] node "no-preload-066175" has status "Ready":"True"
	I0717 18:45:19.738841   80401 node_ready.go:38] duration metric: took 10.872501ms for node "no-preload-066175" to be "Ready" ...
	I0717 18:45:19.738852   80401 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:19.744979   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:19.854180   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:19.873723   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:19.873746   80401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:19.883867   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:19.902041   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:19.902064   80401 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:19.926788   80401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:19.926867   80401 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:19.953788   80401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:20.571091   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571119   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571119   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.571137   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571394   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.571439   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.571456   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571463   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.571459   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572575   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.571494   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572789   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.572761   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572804   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.572815   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.572824   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.573027   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.573044   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589595   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.589614   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.589913   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.589940   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.589918   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.789754   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.789776   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790082   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790103   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790113   80401 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:20.790123   80401 main.go:141] libmachine: (no-preload-066175) Calling .Close
	I0717 18:45:20.790416   80401 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:20.790457   80401 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:20.790470   80401 addons.go:475] Verifying addon metrics-server=true in "no-preload-066175"
	I0717 18:45:20.790416   80401 main.go:141] libmachine: (no-preload-066175) DBG | Closing plugin on server side
	I0717 18:45:20.792175   80401 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:45:18.169876   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:20.170261   80180 pod_ready.go:102] pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:22.664656   80180 pod_ready.go:81] duration metric: took 4m0.000669682s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" ...
	E0717 18:45:22.664696   80180 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-8md44" in "kube-system" namespace to be "Ready" (will not retry!)
	I0717 18:45:22.664716   80180 pod_ready.go:38] duration metric: took 4m9.027997903s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:22.664746   80180 kubeadm.go:597] duration metric: took 4m19.955287366s to restartPrimaryControlPlane
	W0717 18:45:22.664823   80180 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0717 18:45:22.664854   80180 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:45:20.793543   80401 addons.go:510] duration metric: took 1.283145408s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 18:45:21.766367   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.252243   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:24.771415   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:24.771443   80401 pod_ready.go:81] duration metric: took 5.026437249s for pod "coredns-5cfdc65f69-r9xns" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:24.771457   80401 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:26.777371   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:28.778629   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.277550   80401 pod_ready.go:102] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:45:31.792126   80401 pod_ready.go:92] pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.792154   80401 pod_ready.go:81] duration metric: took 7.020687724s for pod "coredns-5cfdc65f69-tx7nc" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.792168   80401 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798687   80401 pod_ready.go:92] pod "etcd-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.798708   80401 pod_ready.go:81] duration metric: took 6.534344ms for pod "etcd-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.798717   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803428   80401 pod_ready.go:92] pod "kube-apiserver-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.803452   80401 pod_ready.go:81] duration metric: took 4.727536ms for pod "kube-apiserver-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.803464   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815053   80401 pod_ready.go:92] pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.815078   80401 pod_ready.go:81] duration metric: took 11.60679ms for pod "kube-controller-manager-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.815092   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824126   80401 pod_ready.go:92] pod "kube-proxy-rgp5c" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:31.824151   80401 pod_ready.go:81] duration metric: took 9.050394ms for pod "kube-proxy-rgp5c" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:31.824163   80401 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176378   80401 pod_ready.go:92] pod "kube-scheduler-no-preload-066175" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:32.176404   80401 pod_ready.go:81] duration metric: took 352.232802ms for pod "kube-scheduler-no-preload-066175" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:32.176414   80401 pod_ready.go:38] duration metric: took 12.437548785s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:32.176430   80401 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:32.176492   80401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:32.190918   80401 api_server.go:72] duration metric: took 12.680546008s to wait for apiserver process to appear ...
	I0717 18:45:32.190942   80401 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:32.190963   80401 api_server.go:253] Checking apiserver healthz at https://192.168.72.216:8443/healthz ...
	I0717 18:45:32.196011   80401 api_server.go:279] https://192.168.72.216:8443/healthz returned 200:
	ok
	I0717 18:45:32.197004   80401 api_server.go:141] control plane version: v1.31.0-beta.0
	I0717 18:45:32.197024   80401 api_server.go:131] duration metric: took 6.075734ms to wait for apiserver health ...
	I0717 18:45:32.197033   80401 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:32.379383   80401 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:32.379412   80401 system_pods.go:61] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.379416   80401 system_pods.go:61] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.379420   80401 system_pods.go:61] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.379423   80401 system_pods.go:61] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.379427   80401 system_pods.go:61] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.379431   80401 system_pods.go:61] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.379433   80401 system_pods.go:61] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.379439   80401 system_pods.go:61] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.379442   80401 system_pods.go:61] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.379450   80401 system_pods.go:74] duration metric: took 182.412193ms to wait for pod list to return data ...
	I0717 18:45:32.379456   80401 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:32.576324   80401 default_sa.go:45] found service account: "default"
	I0717 18:45:32.576348   80401 default_sa.go:55] duration metric: took 196.886306ms for default service account to be created ...
	I0717 18:45:32.576357   80401 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:32.780237   80401 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:32.780266   80401 system_pods.go:89] "coredns-5cfdc65f69-r9xns" [29624b73-848d-4a35-96bc-92f9627842fe] Running
	I0717 18:45:32.780272   80401 system_pods.go:89] "coredns-5cfdc65f69-tx7nc" [085ec394-1ca7-4b9b-9b54-b4fdab45bd75] Running
	I0717 18:45:32.780276   80401 system_pods.go:89] "etcd-no-preload-066175" [6086cbd0-137f-428e-8131-4d57b8823912] Running
	I0717 18:45:32.780280   80401 system_pods.go:89] "kube-apiserver-no-preload-066175" [c1913fea-3c1b-4563-ac80-ee1224b23a35] Running
	I0717 18:45:32.780284   80401 system_pods.go:89] "kube-controller-manager-no-preload-066175" [f6dd2ea0-be8f-4c8c-89b0-57fed0d618fd] Running
	I0717 18:45:32.780288   80401 system_pods.go:89] "kube-proxy-rgp5c" [7aaedb8f-b248-43ac-bd49-4f97d26aa1f6] Running
	I0717 18:45:32.780291   80401 system_pods.go:89] "kube-scheduler-no-preload-066175" [406fae53-d382-42c0-90db-ff9c57ccda8b] Running
	I0717 18:45:32.780298   80401 system_pods.go:89] "metrics-server-78fcd8795b-kj29z" [4b99bc9f-b5a7-4e86-b3ba-2607f9840957] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:32.780302   80401 system_pods.go:89] "storage-provisioner" [c9730cf9-c0f1-4afc-94cc-cbd825158d7c] Running
	I0717 18:45:32.780314   80401 system_pods.go:126] duration metric: took 203.948509ms to wait for k8s-apps to be running ...
	I0717 18:45:32.780323   80401 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:32.780368   80401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:32.796763   80401 system_svc.go:56] duration metric: took 16.430293ms WaitForService to wait for kubelet
	I0717 18:45:32.796791   80401 kubeadm.go:582] duration metric: took 13.286425468s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:32.796809   80401 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:32.977271   80401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:32.977295   80401 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:32.977305   80401 node_conditions.go:105] duration metric: took 180.491938ms to run NodePressure ...
	I0717 18:45:32.977315   80401 start.go:241] waiting for startup goroutines ...
	I0717 18:45:32.977322   80401 start.go:246] waiting for cluster config update ...
	I0717 18:45:32.977331   80401 start.go:255] writing updated cluster config ...
	I0717 18:45:32.977544   80401 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:33.022678   80401 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0717 18:45:33.024737   80401 out.go:177] * Done! kubectl is now configured to use "no-preload-066175" cluster and "default" namespace by default
	I0717 18:45:33.625503   81068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.034773328s)
	I0717 18:45:33.625584   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:33.640151   81068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:33.650198   81068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:33.659027   81068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:33.659048   81068 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:33.659088   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0717 18:45:33.667607   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:33.667663   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:33.677632   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0717 18:45:33.685631   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:33.685683   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:33.694068   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.702840   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:33.702894   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:33.711560   81068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0717 18:45:33.719883   81068 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:33.719928   81068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:33.729898   81068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:33.781672   81068 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:45:33.781776   81068 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:45:33.908046   81068 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:45:33.908199   81068 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:45:33.908366   81068 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 18:45:34.103926   81068 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:45:34.105872   81068 out.go:204]   - Generating certificates and keys ...
	I0717 18:45:34.105979   81068 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:45:34.106063   81068 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:45:34.106183   81068 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:45:34.106425   81068 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:45:34.106542   81068 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:45:34.106624   81068 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:45:34.106729   81068 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:45:34.106827   81068 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:45:34.106901   81068 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:45:34.106984   81068 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:45:34.107046   81068 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:45:34.107142   81068 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:45:34.390326   81068 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:45:34.442610   81068 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:45:34.692719   81068 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:45:34.777644   81068 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:45:35.101349   81068 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:45:35.102039   81068 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:45:35.104892   81068 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:45:35.106561   81068 out.go:204]   - Booting up control plane ...
	I0717 18:45:35.106689   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:45:35.106775   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:45:35.107611   81068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:45:35.126132   81068 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:45:35.127180   81068 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:45:35.127245   81068 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:45:35.250173   81068 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:45:35.250284   81068 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:45:35.752731   81068 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.583425ms
	I0717 18:45:35.752861   81068 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:45:40.754304   81068 kubeadm.go:310] [api-check] The API server is healthy after 5.001385597s
	I0717 18:45:40.766072   81068 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:45:40.785708   81068 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:45:40.816360   81068 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:45:40.816576   81068 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-022930 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:45:40.830588   81068 kubeadm.go:310] [bootstrap-token] Using token: kxmxsp.4wnt2q9oqhdfdirj
	I0717 18:45:40.831905   81068 out.go:204]   - Configuring RBAC rules ...
	I0717 18:45:40.832031   81068 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:45:40.840754   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:45:40.850104   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:45:40.853748   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:45:40.857341   81068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:45:40.860783   81068 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:45:41.161978   81068 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:45:41.600410   81068 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:45:42.161763   81068 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:45:42.163450   81068 kubeadm.go:310] 
	I0717 18:45:42.163541   81068 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:45:42.163558   81068 kubeadm.go:310] 
	I0717 18:45:42.163661   81068 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:45:42.163673   81068 kubeadm.go:310] 
	I0717 18:45:42.163707   81068 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:45:42.163797   81068 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:45:42.163870   81068 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:45:42.163881   81068 kubeadm.go:310] 
	I0717 18:45:42.163974   81068 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:45:42.163990   81068 kubeadm.go:310] 
	I0717 18:45:42.164058   81068 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:45:42.164077   81068 kubeadm.go:310] 
	I0717 18:45:42.164151   81068 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:45:42.164256   81068 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:45:42.164367   81068 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:45:42.164377   81068 kubeadm.go:310] 
	I0717 18:45:42.164489   81068 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:45:42.164588   81068 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:45:42.164595   81068 kubeadm.go:310] 
	I0717 18:45:42.164683   81068 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.164826   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:45:42.164862   81068 kubeadm.go:310] 	--control-plane 
	I0717 18:45:42.164870   81068 kubeadm.go:310] 
	I0717 18:45:42.165002   81068 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:45:42.165012   81068 kubeadm.go:310] 
	I0717 18:45:42.165143   81068 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token kxmxsp.4wnt2q9oqhdfdirj \
	I0717 18:45:42.165257   81068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:45:42.166381   81068 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:42.166436   81068 cni.go:84] Creating CNI manager for ""
	I0717 18:45:42.166456   81068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:45:42.168387   81068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:45:42.169678   81068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:45:42.180065   81068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:45:42.197116   81068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:45:42.197192   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.197217   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-022930 minikube.k8s.io/updated_at=2024_07_17T18_45_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=default-k8s-diff-port-022930 minikube.k8s.io/primary=true
	I0717 18:45:42.216456   81068 ops.go:34] apiserver oom_adj: -16
	I0717 18:45:42.370148   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:42.870732   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.370980   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:43.871201   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.370616   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:44.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.370377   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:45.870614   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.370555   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:46.870513   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.370594   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:47.870651   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.370620   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:48.870863   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.371058   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:49.870188   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.370949   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:50.871187   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.370764   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:51.871007   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.370298   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:52.870917   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.371193   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:53.870491   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.370274   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:54.871160   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.370879   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.870592   81068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:45:55.948131   81068 kubeadm.go:1113] duration metric: took 13.751000929s to wait for elevateKubeSystemPrivileges
	I0717 18:45:55.948166   81068 kubeadm.go:394] duration metric: took 5m11.453950834s to StartCluster
	I0717 18:45:55.948188   81068 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.948265   81068 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:45:55.950777   81068 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:55.951066   81068 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.245 Port:8444 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:55.951134   81068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:45:55.951202   81068 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951237   81068 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951247   81068 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:45:55.951243   81068 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951257   81068 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-022930"
	I0717 18:45:55.951293   81068 config.go:182] Loaded profile config "default-k8s-diff-port-022930": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:45:55.951300   81068 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.951318   81068 addons.go:243] addon metrics-server should already be in state true
	I0717 18:45:55.951319   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951348   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.951292   81068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-022930"
	I0717 18:45:55.951712   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951732   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951744   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.951754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951769   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.951747   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.952885   81068 out.go:177] * Verifying Kubernetes components...
	I0717 18:45:55.954423   81068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:55.968158   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43717
	I0717 18:45:55.968547   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
	I0717 18:45:55.968768   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.968917   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.969414   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969436   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969548   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.969566   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.969814   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970012   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.970235   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.970413   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.970462   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.970809   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44281
	I0717 18:45:55.971165   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.974130   81068 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-022930"
	W0717 18:45:55.974155   81068 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:45:55.974184   81068 host.go:66] Checking if "default-k8s-diff-port-022930" exists ...
	I0717 18:45:55.974549   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.974578   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.981608   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.981640   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.982054   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.982711   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:55.982754   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:55.990665   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40065
	I0717 18:45:55.991297   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.991922   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.991938   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:55.992213   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:55.992346   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:55.993952   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:55.996135   81068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:45:55.997555   81068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:55.997579   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:45:55.997602   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:55.998414   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I0717 18:45:55.998963   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:55.999540   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:55.999554   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.000799   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0717 18:45:56.001014   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001096   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.001419   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.001512   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.001527   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.001755   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.001929   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.002102   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.002141   81068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:45:56.002178   81068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:45:56.002255   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.002686   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.002709   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.003047   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.003251   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.004660   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.006355   81068 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:45:56.007646   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:45:56.007663   81068 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:45:56.007678   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.010711   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011169   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.011220   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.011452   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.011637   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.011806   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.011932   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.021277   81068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0717 18:45:56.021980   81068 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:45:56.022568   81068 main.go:141] libmachine: Using API Version  1
	I0717 18:45:56.022585   81068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:45:56.022949   81068 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:45:56.023127   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetState
	I0717 18:45:56.025023   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .DriverName
	I0717 18:45:56.025443   81068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.025458   81068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:45:56.025476   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHHostname
	I0717 18:45:56.028095   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028450   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:76:ae", ip: ""} in network mk-default-k8s-diff-port-022930: {Iface:virbr2 ExpiryTime:2024-07-17 19:40:29 +0000 UTC Type:0 Mac:52:54:00:5d:76:ae Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:default-k8s-diff-port-022930 Clientid:01:52:54:00:5d:76:ae}
	I0717 18:45:56.028477   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | domain default-k8s-diff-port-022930 has defined IP address 192.168.50.245 and MAC address 52:54:00:5d:76:ae in network mk-default-k8s-diff-port-022930
	I0717 18:45:56.028666   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHPort
	I0717 18:45:56.028853   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHKeyPath
	I0717 18:45:56.029081   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .GetSSHUsername
	I0717 18:45:56.029226   81068 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/default-k8s-diff-port-022930/id_rsa Username:docker}
	I0717 18:45:56.173482   81068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:45:56.194585   81068 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203594   81068 node_ready.go:49] node "default-k8s-diff-port-022930" has status "Ready":"True"
	I0717 18:45:56.203614   81068 node_ready.go:38] duration metric: took 8.994875ms for node "default-k8s-diff-port-022930" to be "Ready" ...
	I0717 18:45:56.203623   81068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:56.207834   81068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212424   81068 pod_ready.go:92] pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.212444   81068 pod_ready.go:81] duration metric: took 4.58857ms for pod "etcd-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.212454   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217013   81068 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.217031   81068 pod_ready.go:81] duration metric: took 4.569971ms for pod "kube-apiserver-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.217040   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221441   81068 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:56.221458   81068 pod_ready.go:81] duration metric: took 4.411121ms for pod "kube-controller-manager-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.221470   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:56.268740   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:45:56.268765   81068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:45:56.290194   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:45:56.310957   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:45:56.310981   81068 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:45:56.352789   81068 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:56.352821   81068 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:45:56.378402   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:45:56.379632   81068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:45:56.518737   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.518766   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519075   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519097   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.519108   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.519117   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.519340   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519352   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.519383   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.519426   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:56.529290   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:56.529317   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:56.529618   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:56.529680   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:56.529697   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386401   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.007961919s)
	I0717 18:45:57.386463   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.386480   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386925   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.386980   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.386999   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.387017   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.386958   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.387283   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.387304   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731240   81068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.351571451s)
	I0717 18:45:57.731287   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731300   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731616   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.731650   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731664   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731672   81068 main.go:141] libmachine: Making call to close driver server
	I0717 18:45:57.731685   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) Calling .Close
	I0717 18:45:57.731905   81068 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:45:57.731930   81068 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:45:57.731949   81068 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-022930"
	I0717 18:45:57.731960   81068 main.go:141] libmachine: (default-k8s-diff-port-022930) DBG | Closing plugin on server side
	I0717 18:45:57.734601   81068 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0717 18:45:53.693038   80180 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.028164403s)
	I0717 18:45:53.693099   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:53.709020   80180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:45:53.718790   80180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:45:53.728384   80180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:45:53.728405   80180 kubeadm.go:157] found existing configuration files:
	
	I0717 18:45:53.728444   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:45:53.737315   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:45:53.737384   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:45:53.746336   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:45:53.754297   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:45:53.754347   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:45:53.763252   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.772186   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:45:53.772229   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:45:53.780829   80180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:45:53.788899   80180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:45:53.788955   80180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 18:45:53.797324   80180 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:45:53.982580   80180 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:45:57.735769   81068 addons.go:510] duration metric: took 1.784634456s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0717 18:45:57.742312   81068 pod_ready.go:92] pod "kube-proxy-hnb5v" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.742333   81068 pod_ready.go:81] duration metric: took 1.520854667s for pod "kube-proxy-hnb5v" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.742344   81068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809858   81068 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace has status "Ready":"True"
	I0717 18:45:57.809885   81068 pod_ready.go:81] duration metric: took 67.527182ms for pod "kube-scheduler-default-k8s-diff-port-022930" in "kube-system" namespace to be "Ready" ...
	I0717 18:45:57.809896   81068 pod_ready.go:38] duration metric: took 1.606263576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:45:57.809914   81068 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:45:57.809972   81068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:45:57.847337   81068 api_server.go:72] duration metric: took 1.896234247s to wait for apiserver process to appear ...
	I0717 18:45:57.847366   81068 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:45:57.847391   81068 api_server.go:253] Checking apiserver healthz at https://192.168.50.245:8444/healthz ...
	I0717 18:45:57.853537   81068 api_server.go:279] https://192.168.50.245:8444/healthz returned 200:
	ok
	I0717 18:45:57.856587   81068 api_server.go:141] control plane version: v1.30.2
	I0717 18:45:57.856661   81068 api_server.go:131] duration metric: took 9.286402ms to wait for apiserver health ...
	I0717 18:45:57.856684   81068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:45:58.002336   81068 system_pods.go:59] 9 kube-system pods found
	I0717 18:45:58.002374   81068 system_pods.go:61] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002383   81068 system_pods.go:61] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.002396   81068 system_pods.go:61] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.002402   81068 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.002408   81068 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.002414   81068 system_pods.go:61] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.002418   81068 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.002425   81068 system_pods.go:61] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.002435   81068 system_pods.go:61] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.002452   81068 system_pods.go:74] duration metric: took 145.752129ms to wait for pod list to return data ...
	I0717 18:45:58.002463   81068 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:45:58.197223   81068 default_sa.go:45] found service account: "default"
	I0717 18:45:58.197250   81068 default_sa.go:55] duration metric: took 194.774408ms for default service account to be created ...
	I0717 18:45:58.197260   81068 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:45:58.401825   81068 system_pods.go:86] 9 kube-system pods found
	I0717 18:45:58.401878   81068 system_pods.go:89] "coredns-7db6d8ff4d-fp4tg" [dc66092c-9183-4630-93cc-6ec4aa59a928] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401891   81068 system_pods.go:89] "coredns-7db6d8ff4d-jn64r" [35cbef26-555a-4693-afac-c739d9238a04] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 18:45:58.401904   81068 system_pods.go:89] "etcd-default-k8s-diff-port-022930" [f83fd844-0ede-4638-b8c6-2ecdecbf4345] Running
	I0717 18:45:58.401917   81068 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-022930" [19fa3a0a-ab56-4163-b39f-2b12ce65d490] Running
	I0717 18:45:58.401927   81068 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-022930" [0037b401-ce9b-41f3-89de-47608a46a228] Running
	I0717 18:45:58.401935   81068 system_pods.go:89] "kube-proxy-hnb5v" [b3b7e71d-bb6e-4b1e-b3e8-e70c6ef4dc0d] Running
	I0717 18:45:58.401940   81068 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-022930" [21fa54d0-9d90-492c-b90c-e5070dd2e350] Running
	I0717 18:45:58.401948   81068 system_pods.go:89] "metrics-server-569cc877fc-pfmwt" [39616dfc-215e-4af5-90f7-12fc28304494] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:45:58.401956   81068 system_pods.go:89] "storage-provisioner" [d9b11611-2008-4a15-a661-62809bd1d4c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 18:45:58.401965   81068 system_pods.go:126] duration metric: took 204.700297ms to wait for k8s-apps to be running ...
	I0717 18:45:58.401975   81068 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:45:58.402024   81068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:45:58.416020   81068 system_svc.go:56] duration metric: took 14.023536ms WaitForService to wait for kubelet
	I0717 18:45:58.416056   81068 kubeadm.go:582] duration metric: took 2.464957357s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:58.416079   81068 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:45:58.598829   81068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:45:58.598863   81068 node_conditions.go:123] node cpu capacity is 2
	I0717 18:45:58.598876   81068 node_conditions.go:105] duration metric: took 182.791383ms to run NodePressure ...
	I0717 18:45:58.598891   81068 start.go:241] waiting for startup goroutines ...
	I0717 18:45:58.598899   81068 start.go:246] waiting for cluster config update ...
	I0717 18:45:58.598912   81068 start.go:255] writing updated cluster config ...
	I0717 18:45:58.599267   81068 ssh_runner.go:195] Run: rm -f paused
	I0717 18:45:58.661380   81068 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:45:58.663085   81068 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-022930" cluster and "default" namespace by default
	I0717 18:46:02.558673   80180 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 18:46:02.558766   80180 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:02.558842   80180 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:02.558980   80180 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:02.559118   80180 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:02.559210   80180 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:02.561934   80180 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:02.562036   80180 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:02.562108   80180 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:02.562191   80180 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:02.562290   80180 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:02.562393   80180 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:02.562478   80180 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:02.562565   80180 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:02.562643   80180 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:02.562711   80180 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:02.562826   80180 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:02.562886   80180 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:02.562958   80180 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:02.563005   80180 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:02.563081   80180 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 18:46:02.563136   80180 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:02.563210   80180 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:02.563293   80180 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:02.563405   80180 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:02.563468   80180 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:02.564989   80180 out.go:204]   - Booting up control plane ...
	I0717 18:46:02.565092   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:02.565181   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:02.565270   80180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:02.565400   80180 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:02.565526   80180 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:02.565597   80180 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:02.565783   80180 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 18:46:02.565880   80180 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 18:46:02.565959   80180 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.323304ms
	I0717 18:46:02.566046   80180 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 18:46:02.566105   80180 kubeadm.go:310] [api-check] The API server is healthy after 5.002038309s
	I0717 18:46:02.566206   80180 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:46:02.566307   80180 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:46:02.566359   80180 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:46:02.566525   80180 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-527415 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:46:02.566575   80180 kubeadm.go:310] [bootstrap-token] Using token: xeax16.7z40teb0jswemrgg
	I0717 18:46:02.568038   80180 out.go:204]   - Configuring RBAC rules ...
	I0717 18:46:02.568120   80180 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:46:02.568194   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:46:02.568314   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:46:02.568449   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:46:02.568553   80180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:46:02.568660   80180 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:46:02.568807   80180 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:46:02.568877   80180 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 18:46:02.568926   80180 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 18:46:02.568936   80180 kubeadm.go:310] 
	I0717 18:46:02.569032   80180 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 18:46:02.569044   80180 kubeadm.go:310] 
	I0717 18:46:02.569108   80180 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 18:46:02.569114   80180 kubeadm.go:310] 
	I0717 18:46:02.569157   80180 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 18:46:02.569249   80180 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:46:02.569326   80180 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:46:02.569346   80180 kubeadm.go:310] 
	I0717 18:46:02.569432   80180 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 18:46:02.569442   80180 kubeadm.go:310] 
	I0717 18:46:02.569511   80180 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:46:02.569519   80180 kubeadm.go:310] 
	I0717 18:46:02.569599   80180 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 18:46:02.569695   80180 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:46:02.569790   80180 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:46:02.569797   80180 kubeadm.go:310] 
	I0717 18:46:02.569905   80180 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:46:02.569985   80180 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 18:46:02.569998   80180 kubeadm.go:310] 
	I0717 18:46:02.570096   80180 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570234   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 \
	I0717 18:46:02.570264   80180 kubeadm.go:310] 	--control-plane 
	I0717 18:46:02.570273   80180 kubeadm.go:310] 
	I0717 18:46:02.570348   80180 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:46:02.570355   80180 kubeadm.go:310] 
	I0717 18:46:02.570429   80180 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xeax16.7z40teb0jswemrgg \
	I0717 18:46:02.570555   80180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3751818c64ee5a0af077a7d260c109f8a4e12a6f6bf2dcdb2dcbe26e1374c7b6 
	I0717 18:46:02.570569   80180 cni.go:84] Creating CNI manager for ""
	I0717 18:46:02.570578   80180 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 18:46:02.571934   80180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 18:46:02.573034   80180 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 18:46:02.583253   80180 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 18:46:02.603658   80180 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-527415 minikube.k8s.io/updated_at=2024_07_17T18_46_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86 minikube.k8s.io/name=embed-certs-527415 minikube.k8s.io/primary=true
	I0717 18:46:02.603745   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:02.621414   80180 ops.go:34] apiserver oom_adj: -16
	I0717 18:46:02.792226   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.292632   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:03.792270   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.293220   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:04.793011   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.292596   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:05.793043   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.293286   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:06.793069   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.292569   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:07.792604   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.293028   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:08.792259   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.292273   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:09.792672   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.293080   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.792442   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.292894   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.792436   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.292411   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.792327   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.292909   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.792878   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.293188   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.793038   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.292453   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.792367   80180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.898487   80180 kubeadm.go:1113] duration metric: took 13.294815165s to wait for elevateKubeSystemPrivileges
	I0717 18:46:15.898528   80180 kubeadm.go:394] duration metric: took 5m13.234208822s to StartCluster
	I0717 18:46:15.898546   80180 settings.go:142] acquiring lock: {Name:mk9cd301a49888b6dce40136fa939a3e1568d41d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.898626   80180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:46:15.900239   80180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-14386/kubeconfig: {Name:mk88d8abe29d525d6765dc5f6ab6e2170d59aa2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:15.900462   80180 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.90 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:46:15.900564   80180 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 18:46:15.900648   80180 config.go:182] Loaded profile config "embed-certs-527415": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:46:15.900655   80180 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-527415"
	I0717 18:46:15.900667   80180 addons.go:69] Setting default-storageclass=true in profile "embed-certs-527415"
	I0717 18:46:15.900691   80180 addons.go:69] Setting metrics-server=true in profile "embed-certs-527415"
	I0717 18:46:15.900704   80180 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-527415"
	I0717 18:46:15.900709   80180 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-527415"
	I0717 18:46:15.900714   80180 addons.go:234] Setting addon metrics-server=true in "embed-certs-527415"
	W0717 18:46:15.900747   80180 addons.go:243] addon metrics-server should already be in state true
	I0717 18:46:15.900777   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	W0717 18:46:15.900715   80180 addons.go:243] addon storage-provisioner should already be in state true
	I0717 18:46:15.900852   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.901106   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901150   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901152   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901183   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.901264   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.901298   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.902177   80180 out.go:177] * Verifying Kubernetes components...
	I0717 18:46:15.903698   80180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:46:15.918294   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0717 18:46:15.918295   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0717 18:46:15.918859   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.918909   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919433   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919455   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919478   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40379
	I0717 18:46:15.919548   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.919572   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.919788   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.919875   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.919883   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920316   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920323   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.920338   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.920345   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920387   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.920425   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.920695   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.920890   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.924623   80180 addons.go:234] Setting addon default-storageclass=true in "embed-certs-527415"
	W0717 18:46:15.924644   80180 addons.go:243] addon default-storageclass should already be in state true
	I0717 18:46:15.924672   80180 host.go:66] Checking if "embed-certs-527415" exists ...
	I0717 18:46:15.925801   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.925830   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.936020   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0717 18:46:15.936280   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0717 18:46:15.936365   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.936674   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.937144   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937164   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937229   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.937239   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.937565   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937587   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.937770   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.937872   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.939671   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.939856   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.941929   80180 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:46:15.941934   80180 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 18:46:15.943632   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:46:15.943650   80180 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:46:15.943668   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.943715   80180 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:15.943724   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:46:15.943737   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.946283   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0717 18:46:15.946815   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.947230   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.947240   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.947272   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.947953   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.947987   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948001   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.948179   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.948223   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948248   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.948388   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.948604   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.948627   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.948653   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.948832   80180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:46:15.948870   80180 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:46:15.948895   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.949086   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.949307   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.949454   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:15.969385   80180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35001
	I0717 18:46:15.969789   80180 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:46:15.970221   80180 main.go:141] libmachine: Using API Version  1
	I0717 18:46:15.970241   80180 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:46:15.970756   80180 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:46:15.970963   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetState
	I0717 18:46:15.972631   80180 main.go:141] libmachine: (embed-certs-527415) Calling .DriverName
	I0717 18:46:15.972849   80180 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:15.972868   80180 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:46:15.972889   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHHostname
	I0717 18:46:15.975680   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976123   80180 main.go:141] libmachine: (embed-certs-527415) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:52:9a", ip: ""} in network mk-embed-certs-527415: {Iface:virbr4 ExpiryTime:2024-07-17 19:31:46 +0000 UTC Type:0 Mac:52:54:00:4e:52:9a Iaid: IPaddr:192.168.61.90 Prefix:24 Hostname:embed-certs-527415 Clientid:01:52:54:00:4e:52:9a}
	I0717 18:46:15.976187   80180 main.go:141] libmachine: (embed-certs-527415) DBG | domain embed-certs-527415 has defined IP address 192.168.61.90 and MAC address 52:54:00:4e:52:9a in network mk-embed-certs-527415
	I0717 18:46:15.976320   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHPort
	I0717 18:46:15.976496   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHKeyPath
	I0717 18:46:15.976657   80180 main.go:141] libmachine: (embed-certs-527415) Calling .GetSSHUsername
	I0717 18:46:15.976748   80180 sshutil.go:53] new ssh client: &{IP:192.168.61.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/embed-certs-527415/id_rsa Username:docker}
	I0717 18:46:16.134605   80180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 18:46:16.206139   80180 node_ready.go:35] waiting up to 6m0s for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214532   80180 node_ready.go:49] node "embed-certs-527415" has status "Ready":"True"
	I0717 18:46:16.214550   80180 node_ready.go:38] duration metric: took 8.382109ms for node "embed-certs-527415" to be "Ready" ...
	I0717 18:46:16.214568   80180 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:16.223573   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:16.254146   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:46:16.254166   80180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 18:46:16.293257   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:16.312304   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:16.334927   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:46:16.334949   80180 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:46:16.404696   80180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:16.404723   80180 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:46:16.462835   80180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281088   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281062   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281157   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281395   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281402   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281415   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281424   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281427   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.281432   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281436   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.281676   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281678   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281700   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.281705   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.281722   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.281732   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.300264   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.300294   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.300592   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.300643   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.300672   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.489477   80180 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026593042s)
	I0717 18:46:17.489520   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.489534   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490020   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.490047   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490055   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490068   80180 main.go:141] libmachine: Making call to close driver server
	I0717 18:46:17.490077   80180 main.go:141] libmachine: (embed-certs-527415) Calling .Close
	I0717 18:46:17.490344   80180 main.go:141] libmachine: Successfully made call to close driver server
	I0717 18:46:17.490373   80180 main.go:141] libmachine: Making call to close connection to plugin binary
	I0717 18:46:17.490384   80180 addons.go:475] Verifying addon metrics-server=true in "embed-certs-527415"
	I0717 18:46:17.490397   80180 main.go:141] libmachine: (embed-certs-527415) DBG | Closing plugin on server side
	I0717 18:46:17.492257   80180 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0717 18:46:17.493487   80180 addons.go:510] duration metric: took 1.592928152s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0717 18:46:18.230569   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.230592   80180 pod_ready.go:81] duration metric: took 2.006995421s for pod "coredns-7db6d8ff4d-2zt8k" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.230603   80180 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235298   80180 pod_ready.go:92] pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.235317   80180 pod_ready.go:81] duration metric: took 4.707534ms for pod "coredns-7db6d8ff4d-f64kh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.235327   80180 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.238998   80180 pod_ready.go:92] pod "etcd-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.239015   80180 pod_ready.go:81] duration metric: took 3.681191ms for pod "etcd-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.239023   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242949   80180 pod_ready.go:92] pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.242967   80180 pod_ready.go:81] duration metric: took 3.937614ms for pod "kube-apiserver-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.242977   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246567   80180 pod_ready.go:92] pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.246580   80180 pod_ready.go:81] duration metric: took 3.597434ms for pod "kube-controller-manager-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.246588   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628607   80180 pod_ready.go:92] pod "kube-proxy-m52fq" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:18.628636   80180 pod_ready.go:81] duration metric: took 382.042151ms for pod "kube-proxy-m52fq" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:18.628650   80180 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028536   80180 pod_ready.go:92] pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:19.028558   80180 pod_ready.go:81] duration metric: took 399.900565ms for pod "kube-scheduler-embed-certs-527415" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:19.028565   80180 pod_ready.go:38] duration metric: took 2.813989212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:19.028578   80180 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:46:19.028630   80180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:46:19.044787   80180 api_server.go:72] duration metric: took 3.144295616s to wait for apiserver process to appear ...
	I0717 18:46:19.044810   80180 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:46:19.044825   80180 api_server.go:253] Checking apiserver healthz at https://192.168.61.90:8443/healthz ...
	I0717 18:46:19.051106   80180 api_server.go:279] https://192.168.61.90:8443/healthz returned 200:
	ok
	I0717 18:46:19.052094   80180 api_server.go:141] control plane version: v1.30.2
	I0717 18:46:19.052111   80180 api_server.go:131] duration metric: took 7.296406ms to wait for apiserver health ...
	I0717 18:46:19.052117   80180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:46:19.231877   80180 system_pods.go:59] 9 kube-system pods found
	I0717 18:46:19.231905   80180 system_pods.go:61] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.231912   80180 system_pods.go:61] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.231916   80180 system_pods.go:61] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.231921   80180 system_pods.go:61] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.231925   80180 system_pods.go:61] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.231929   80180 system_pods.go:61] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.231934   80180 system_pods.go:61] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.231942   80180 system_pods.go:61] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.231947   80180 system_pods.go:61] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.231957   80180 system_pods.go:74] duration metric: took 179.833729ms to wait for pod list to return data ...
	I0717 18:46:19.231966   80180 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:46:19.427972   80180 default_sa.go:45] found service account: "default"
	I0717 18:46:19.427994   80180 default_sa.go:55] duration metric: took 196.021611ms for default service account to be created ...
	I0717 18:46:19.428002   80180 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:46:19.630730   80180 system_pods.go:86] 9 kube-system pods found
	I0717 18:46:19.630755   80180 system_pods.go:89] "coredns-7db6d8ff4d-2zt8k" [5e2e90bb-5721-4ca8-8177-77e6b686175a] Running
	I0717 18:46:19.630760   80180 system_pods.go:89] "coredns-7db6d8ff4d-f64kh" [f0de6ef4-1402-44b2-81f3-3f234a72d151] Running
	I0717 18:46:19.630765   80180 system_pods.go:89] "etcd-embed-certs-527415" [79d210fe-c4d9-476f-ab78-cce3b98c1c95] Running
	I0717 18:46:19.630769   80180 system_pods.go:89] "kube-apiserver-embed-certs-527415" [8b43654e-7127-4e43-91e6-1239bf66661d] Running
	I0717 18:46:19.630774   80180 system_pods.go:89] "kube-controller-manager-embed-certs-527415" [55da9f4c-566b-4f82-a700-236d117bd9a4] Running
	I0717 18:46:19.630778   80180 system_pods.go:89] "kube-proxy-m52fq" [40f99883-b343-43b3-8f94-4b45b379a17b] Running
	I0717 18:46:19.630782   80180 system_pods.go:89] "kube-scheduler-embed-certs-527415" [e6031b0b-5aa6-4827-b41a-a422d05c0b9a] Running
	I0717 18:46:19.630788   80180 system_pods.go:89] "metrics-server-569cc877fc-hvxtg" [05a18f70-4284-4315-892e-2850ac8b5050] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 18:46:19.630792   80180 system_pods.go:89] "storage-provisioner" [5f473bbe-0727-4f25-ba39-4ed322767465] Running
	I0717 18:46:19.630800   80180 system_pods.go:126] duration metric: took 202.793522ms to wait for k8s-apps to be running ...
	I0717 18:46:19.630806   80180 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:46:19.630849   80180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:19.646111   80180 system_svc.go:56] duration metric: took 15.296964ms WaitForService to wait for kubelet
	I0717 18:46:19.646133   80180 kubeadm.go:582] duration metric: took 3.745647205s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:46:19.646149   80180 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:46:19.828333   80180 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0717 18:46:19.828356   80180 node_conditions.go:123] node cpu capacity is 2
	I0717 18:46:19.828368   80180 node_conditions.go:105] duration metric: took 182.213813ms to run NodePressure ...
	I0717 18:46:19.828381   80180 start.go:241] waiting for startup goroutines ...
	I0717 18:46:19.828389   80180 start.go:246] waiting for cluster config update ...
	I0717 18:46:19.828401   80180 start.go:255] writing updated cluster config ...
	I0717 18:46:19.828690   80180 ssh_runner.go:195] Run: rm -f paused
	I0717 18:46:19.877774   80180 start.go:600] kubectl: 1.30.3, cluster: 1.30.2 (minor skew: 0)
	I0717 18:46:19.879769   80180 out.go:177] * Done! kubectl is now configured to use "embed-certs-527415" cluster and "default" namespace by default
	I0717 18:46:33.124646   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:46:33.124790   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:46:33.126245   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.126307   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.126409   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.126547   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.126673   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:33.126734   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:33.128541   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:33.128626   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:33.128707   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:33.128817   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:33.128901   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:33.129018   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:33.129091   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:33.129172   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:33.129249   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:33.129339   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:33.129408   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:33.129444   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:33.129532   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:33.129603   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:33.129665   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:33.129765   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:33.129812   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:33.129929   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:33.130037   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:33.130093   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:33.130177   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:33.131546   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:33.131652   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:33.131750   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:33.131858   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:33.131939   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:33.132085   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:46:33.132133   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:46:33.132189   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132355   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132419   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132585   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132657   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.132839   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.132900   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133143   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133248   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:46:33.133452   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:46:33.133460   80857 kubeadm.go:310] 
	I0717 18:46:33.133494   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:46:33.133529   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:46:33.133535   80857 kubeadm.go:310] 
	I0717 18:46:33.133564   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:46:33.133599   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:46:33.133727   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:46:33.133752   80857 kubeadm.go:310] 
	I0717 18:46:33.133905   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:46:33.133947   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:46:33.134002   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:46:33.134012   80857 kubeadm.go:310] 
	I0717 18:46:33.134116   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:46:33.134186   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:46:33.134193   80857 kubeadm.go:310] 
	I0717 18:46:33.134290   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:46:33.134367   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:46:33.134431   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:46:33.134491   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:46:33.134533   80857 kubeadm.go:310] 
	W0717 18:46:33.134615   80857 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
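	The diagnostics suggested in the kubeadm error above run inside the minikube VM, not on the host. A hedged sketch of invoking them through the profile's SSH session (the profile name old-k8s-version-019549 is taken from the node logs later in this report, not printed by this stanza itself):
	
	    minikube ssh -p old-k8s-version-019549 -- sudo systemctl status kubelet --no-pager
	    minikube ssh -p old-k8s-version-019549 -- sudo journalctl -xeu kubelet --no-pager | tail -n 100
	    minikube ssh -p old-k8s-version-019549 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a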
	
	I0717 18:46:33.134669   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0717 18:46:33.590879   80857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:33.605393   80857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:46:33.614382   80857 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:46:33.614405   80857 kubeadm.go:157] found existing configuration files:
	
	I0717 18:46:33.614450   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 18:46:33.622849   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 18:46:33.622905   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 18:46:33.631852   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 18:46:33.640160   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 18:46:33.640211   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 18:46:33.648774   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.656740   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 18:46:33.656796   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 18:46:33.665799   80857 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 18:46:33.674492   80857 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 18:46:33.674547   80857 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
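	The cleanup loop above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the endpoint is absent. A condensed sketch of the same logic, assuming a shell inside the VM (illustrative only; minikube drives these steps through ssh_runner as logged):
	
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	    done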
	I0717 18:46:33.683627   80857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0717 18:46:33.746405   80857 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0717 18:46:33.746472   80857 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 18:46:33.881152   80857 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:33.881297   80857 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:33.881443   80857 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:34.053199   80857 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:34.055757   80857 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:34.055843   80857 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 18:46:34.055918   80857 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:34.056030   80857 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 18:46:34.056129   80857 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0717 18:46:34.056232   80857 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 18:46:34.056336   80857 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0717 18:46:34.056431   80857 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0717 18:46:34.056524   80857 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0717 18:46:34.056656   80857 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 18:46:34.056764   80857 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 18:46:34.056824   80857 kubeadm.go:310] [certs] Using the existing "sa" key
	I0717 18:46:34.056900   80857 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:34.276456   80857 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:34.491418   80857 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:34.702265   80857 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:34.874511   80857 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:34.895484   80857 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:34.896451   80857 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:34.896536   80857 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 18:46:35.040208   80857 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:35.042291   80857 out.go:204]   - Booting up control plane ...
	I0717 18:46:35.042437   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:35.042565   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:35.044391   80857 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:35.046206   80857 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:35.050843   80857 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:47:15.053070   80857 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0717 18:47:15.053416   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:15.053586   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:20.053963   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:20.054207   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:30.054801   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:30.055011   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:47:50.055270   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:47:50.055465   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.053919   80857 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 18:48:30.054133   80857 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 18:48:30.054148   80857 kubeadm.go:310] 
	I0717 18:48:30.054231   80857 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0717 18:48:30.054300   80857 kubeadm.go:310] 		timed out waiting for the condition
	I0717 18:48:30.054326   80857 kubeadm.go:310] 
	I0717 18:48:30.054386   80857 kubeadm.go:310] 	This error is likely caused by:
	I0717 18:48:30.054443   80857 kubeadm.go:310] 		- The kubelet is not running
	I0717 18:48:30.054581   80857 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 18:48:30.054593   80857 kubeadm.go:310] 
	I0717 18:48:30.054715   80857 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 18:48:30.054761   80857 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0717 18:48:30.054810   80857 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0717 18:48:30.054818   80857 kubeadm.go:310] 
	I0717 18:48:30.054970   80857 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 18:48:30.055069   80857 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 18:48:30.055081   80857 kubeadm.go:310] 
	I0717 18:48:30.055236   80857 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0717 18:48:30.055332   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0717 18:48:30.055396   80857 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0717 18:48:30.055457   80857 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0717 18:48:30.055483   80857 kubeadm.go:310] 
	I0717 18:48:30.056139   80857 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:48:30.056246   80857 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 18:48:30.056338   80857 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0717 18:48:30.056413   80857 kubeadm.go:394] duration metric: took 8m2.908780359s to StartCluster
	I0717 18:48:30.056461   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0717 18:48:30.056524   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0717 18:48:30.102640   80857 cri.go:89] found id: ""
	I0717 18:48:30.102662   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.102669   80857 logs.go:278] No container was found matching "kube-apiserver"
	I0717 18:48:30.102674   80857 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0717 18:48:30.102724   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0717 18:48:30.142516   80857 cri.go:89] found id: ""
	I0717 18:48:30.142548   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.142559   80857 logs.go:278] No container was found matching "etcd"
	I0717 18:48:30.142567   80857 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0717 18:48:30.142630   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0717 18:48:30.178558   80857 cri.go:89] found id: ""
	I0717 18:48:30.178589   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.178598   80857 logs.go:278] No container was found matching "coredns"
	I0717 18:48:30.178604   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0717 18:48:30.178677   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0717 18:48:30.211146   80857 cri.go:89] found id: ""
	I0717 18:48:30.211177   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.211186   80857 logs.go:278] No container was found matching "kube-scheduler"
	I0717 18:48:30.211192   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0717 18:48:30.211242   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0717 18:48:30.244287   80857 cri.go:89] found id: ""
	I0717 18:48:30.244308   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.244314   80857 logs.go:278] No container was found matching "kube-proxy"
	I0717 18:48:30.244319   80857 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0717 18:48:30.244364   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0717 18:48:30.274547   80857 cri.go:89] found id: ""
	I0717 18:48:30.274577   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.274587   80857 logs.go:278] No container was found matching "kube-controller-manager"
	I0717 18:48:30.274594   80857 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0717 18:48:30.274660   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0717 18:48:30.306796   80857 cri.go:89] found id: ""
	I0717 18:48:30.306825   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.306835   80857 logs.go:278] No container was found matching "kindnet"
	I0717 18:48:30.306842   80857 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0717 18:48:30.306903   80857 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0717 18:48:30.341938   80857 cri.go:89] found id: ""
	I0717 18:48:30.341962   80857 logs.go:276] 0 containers: []
	W0717 18:48:30.341972   80857 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0717 18:48:30.341982   80857 logs.go:123] Gathering logs for kubelet ...
	I0717 18:48:30.341997   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 18:48:30.407881   80857 logs.go:123] Gathering logs for dmesg ...
	I0717 18:48:30.407925   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 18:48:30.430885   80857 logs.go:123] Gathering logs for describe nodes ...
	I0717 18:48:30.430913   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 18:48:30.525366   80857 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 18:48:30.525394   80857 logs.go:123] Gathering logs for CRI-O ...
	I0717 18:48:30.525408   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0717 18:48:30.639556   80857 logs.go:123] Gathering logs for container status ...
	I0717 18:48:30.639588   80857 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 18:48:30.677493   80857 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 18:48:30.677544   80857 out.go:239] * 
	W0717 18:48:30.677604   80857 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.677636   80857 out.go:239] * 
	W0717 18:48:30.678483   80857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 18:48:30.681792   80857 out.go:177] 
	W0717 18:48:30.682976   80857 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 18:48:30.683034   80857 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 18:48:30.683050   80857 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 18:48:30.684325   80857 out.go:177] 
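	Acting on the suggestion printed above would look roughly like the following. This is a hedged sketch only: the profile name comes from the node logs below, the flag is the one quoted in the suggestion, and any other start flags the test normally passes are omitted:
	
	    minikube start -p old-k8s-version-019549 --extra-config=kubelet.cgroup-driver=systemd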
	
	
	==> CRI-O <==
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.356894774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242841356861548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbb98b6e-94ba-4329-b5f4-aa0079f1feb6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.357323883Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9478d6c-9113-4ad5-ae71-f67dece51598 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.357366082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9478d6c-9113-4ad5-ae71-f67dece51598 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.357395208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d9478d6c-9113-4ad5-ae71-f67dece51598 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.385312519Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c04bf9c6-b095-4155-9088-559a662fb257 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.385403238Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c04bf9c6-b095-4155-9088-559a662fb257 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.386228528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19da1e0f-10b9-4657-bf78-72257f9567f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.386691884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242841386657129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19da1e0f-10b9-4657-bf78-72257f9567f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.387223483Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b7c7449-4527-4451-a5f8-2810377915b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.387297182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b7c7449-4527-4451-a5f8-2810377915b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.387334297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1b7c7449-4527-4451-a5f8-2810377915b7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.417373403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08d3271f-5d82-4756-b1c8-ea4aec5178fb name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.417460606Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08d3271f-5d82-4756-b1c8-ea4aec5178fb name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.418439654Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91941950-de36-4870-99bd-dc96e391559c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.418884623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242841418861990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91941950-de36-4870-99bd-dc96e391559c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.419335049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddb24315-b6ec-4aff-9ee8-5174acf48d1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.419406132Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ddb24315-b6ec-4aff-9ee8-5174acf48d1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.419443620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ddb24315-b6ec-4aff-9ee8-5174acf48d1c name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.450370076Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=876561d0-7484-49bb-8c22-91d6519c0755 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.450460554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=876561d0-7484-49bb-8c22-91d6519c0755 name=/runtime.v1.RuntimeService/Version
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.451643459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7953df90-b7f3-4fc6-9c58-3634a9b69abb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.452073400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721242841452044786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7953df90-b7f3-4fc6-9c58-3634a9b69abb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.452538848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01886294-f32a-4fad-a039-4934828a92c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.452607355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01886294-f32a-4fad-a039-4934828a92c9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 17 19:00:41 old-k8s-version-019549 crio[648]: time="2024-07-17 19:00:41.452660106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=01886294-f32a-4fad-a039-4934828a92c9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul17 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051628] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040768] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.517042] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.721932] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.548665] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.018518] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.058706] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069719] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.203391] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.148278] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.237346] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.350758] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.060103] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.283579] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[ +13.881143] kauditd_printk_skb: 46 callbacks suppressed
	[Jul17 18:44] systemd-fstab-generator[5065]: Ignoring "noauto" option for root device
	[Jul17 18:46] systemd-fstab-generator[5343]: Ignoring "noauto" option for root device
	[  +0.061949] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:00:41 up 20 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-019549 5.10.207 #1 SMP Tue Jul 16 20:46:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000a69320)
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]: goroutine 154 [select]:
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000743ef0, 0x4f0ac20, 0xc000c5dea0, 0x1, 0xc0001020c0)
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000254460, 0xc0001020c0)
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c73160, 0xc000c83e60)
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 17 19:00:39 old-k8s-version-019549 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 17 19:00:39 old-k8s-version-019549 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 19:00:39 old-k8s-version-019549 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 19:00:40 old-k8s-version-019549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 147.
	Jul 17 19:00:40 old-k8s-version-019549 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 19:00:40 old-k8s-version-019549 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 19:00:40 old-k8s-version-019549 kubelet[6951]: I0717 19:00:40.703458    6951 server.go:416] Version: v1.20.0
	Jul 17 19:00:40 old-k8s-version-019549 kubelet[6951]: I0717 19:00:40.703952    6951 server.go:837] Client rotation is on, will bootstrap in background
	Jul 17 19:00:40 old-k8s-version-019549 kubelet[6951]: I0717 19:00:40.706852    6951 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 19:00:40 old-k8s-version-019549 kubelet[6951]: W0717 19:00:40.708152    6951 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 17 19:00:40 old-k8s-version-019549 kubelet[6951]: I0717 19:00:40.708619    6951 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019549 -n old-k8s-version-019549
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 2 (219.480867ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-019549" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (185.55s)

                                                
                                    

Test pass (250/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 23.83
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.30.2/json-events 11.58
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.05
18 TestDownloadOnly/v1.30.2/DeleteAll 0.12
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 17.78
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.54
31 TestOffline 59.99
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
36 TestAddons/Setup 201.46
38 TestAddons/parallel/Registry 32.67
40 TestAddons/parallel/InspektorGadget 11.05
42 TestAddons/parallel/HelmTiller 22.13
44 TestAddons/parallel/CSI 61.19
45 TestAddons/parallel/Headlamp 23.13
46 TestAddons/parallel/CloudSpanner 5.55
47 TestAddons/parallel/LocalPath 12.08
48 TestAddons/parallel/NvidiaDevicePlugin 6.69
49 TestAddons/parallel/Yakd 6.01
53 TestAddons/serial/GCPAuth/Namespaces 0.12
55 TestCertOptions 76.04
56 TestCertExpiration 286.92
58 TestForceSystemdFlag 79.99
59 TestForceSystemdEnv 83.51
61 TestKVMDriverInstallOrUpdate 3.8
65 TestErrorSpam/setup 38.2
66 TestErrorSpam/start 0.32
67 TestErrorSpam/status 0.69
68 TestErrorSpam/pause 1.45
69 TestErrorSpam/unpause 1.5
70 TestErrorSpam/stop 4.46
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 90.99
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 39.63
77 TestFunctional/serial/KubeContext 0.05
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.63
82 TestFunctional/serial/CacheCmd/cache/add_local 1.98
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
90 TestFunctional/serial/ExtraConfig 34.46
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.44
93 TestFunctional/serial/LogsFileCmd 1.31
94 TestFunctional/serial/InvalidService 3.96
96 TestFunctional/parallel/ConfigCmd 0.29
97 TestFunctional/parallel/DashboardCmd 11.38
98 TestFunctional/parallel/DryRun 0.25
99 TestFunctional/parallel/InternationalLanguage 0.13
100 TestFunctional/parallel/StatusCmd 0.96
104 TestFunctional/parallel/ServiceCmdConnect 8.43
105 TestFunctional/parallel/AddonsCmd 0.11
106 TestFunctional/parallel/PersistentVolumeClaim 44.72
108 TestFunctional/parallel/SSHCmd 0.38
109 TestFunctional/parallel/CpCmd 1.2
110 TestFunctional/parallel/MySQL 22.42
111 TestFunctional/parallel/FileSync 0.2
112 TestFunctional/parallel/CertSync 1.25
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
120 TestFunctional/parallel/License 0.55
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.42
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
127 TestFunctional/parallel/ImageCommands/ImageBuild 3.38
128 TestFunctional/parallel/ImageCommands/Setup 1.77
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
132 TestFunctional/parallel/ServiceCmd/DeployApp 50.19
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.21
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.46
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.89
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.06
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.33
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.57
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
150 TestFunctional/parallel/ProfileCmd/profile_list 0.27
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
152 TestFunctional/parallel/MountCmd/any-port 8.21
153 TestFunctional/parallel/MountCmd/specific-port 1.88
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.56
155 TestFunctional/parallel/ServiceCmd/List 1.21
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.21
157 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
158 TestFunctional/parallel/ServiceCmd/Format 0.26
159 TestFunctional/parallel/ServiceCmd/URL 0.26
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 268.2
167 TestMultiControlPlane/serial/DeployApp 6.13
168 TestMultiControlPlane/serial/PingHostFromPods 1.14
169 TestMultiControlPlane/serial/AddWorkerNode 51.48
170 TestMultiControlPlane/serial/NodeLabels 0.06
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.51
172 TestMultiControlPlane/serial/CopyFile 12.36
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.15
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
181 TestMultiControlPlane/serial/RestartCluster 346.36
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
183 TestMultiControlPlane/serial/AddSecondaryNode 75.7
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.5
188 TestJSONOutput/start/Command 53.67
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.69
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.59
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.33
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 78.93
220 TestMountStart/serial/StartWithMountFirst 24.75
221 TestMountStart/serial/VerifyMountFirst 0.36
222 TestMountStart/serial/StartWithMountSecond 24.49
223 TestMountStart/serial/VerifyMountSecond 0.36
224 TestMountStart/serial/DeleteFirst 0.67
225 TestMountStart/serial/VerifyMountPostDelete 0.36
226 TestMountStart/serial/Stop 1.26
227 TestMountStart/serial/RestartStopped 19.89
228 TestMountStart/serial/VerifyMountPostStop 0.35
231 TestMultiNode/serial/FreshStart2Nodes 115.16
232 TestMultiNode/serial/DeployApp2Nodes 5.15
233 TestMultiNode/serial/PingHostFrom2Pods 0.74
234 TestMultiNode/serial/AddNode 47.74
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.2
237 TestMultiNode/serial/CopyFile 6.86
238 TestMultiNode/serial/StopNode 2.17
239 TestMultiNode/serial/StartAfterStop 38.46
241 TestMultiNode/serial/DeleteNode 2.22
243 TestMultiNode/serial/RestartMultiNode 180.12
244 TestMultiNode/serial/ValidateNameConflict 40.56
251 TestScheduledStopUnix 113.01
255 TestRunningBinaryUpgrade 229.37
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
264 TestNoKubernetes/serial/StartWithK8s 70.15
269 TestNetworkPlugins/group/false 2.87
273 TestNoKubernetes/serial/StartWithStopK8s 62.73
274 TestNoKubernetes/serial/Start 39.25
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
276 TestNoKubernetes/serial/ProfileList 1.71
277 TestNoKubernetes/serial/Stop 1.39
278 TestNoKubernetes/serial/StartNoArgs 44.13
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
287 TestStoppedBinaryUpgrade/Setup 2.24
288 TestStoppedBinaryUpgrade/Upgrade 110.89
290 TestPause/serial/Start 132.71
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
292 TestNetworkPlugins/group/auto/Start 58.14
294 TestNetworkPlugins/group/kindnet/Start 73.1
295 TestNetworkPlugins/group/auto/KubeletFlags 0.21
296 TestNetworkPlugins/group/auto/NetCatPod 11.21
297 TestNetworkPlugins/group/auto/DNS 0.16
298 TestNetworkPlugins/group/auto/Localhost 0.14
299 TestNetworkPlugins/group/auto/HairPin 0.13
300 TestNetworkPlugins/group/calico/Start 84.91
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
303 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
304 TestNetworkPlugins/group/kindnet/DNS 0.19
305 TestNetworkPlugins/group/kindnet/Localhost 0.16
306 TestNetworkPlugins/group/kindnet/HairPin 0.16
307 TestNetworkPlugins/group/custom-flannel/Start 103.43
308 TestNetworkPlugins/group/calico/ControllerPod 6.01
309 TestNetworkPlugins/group/calico/KubeletFlags 0.21
310 TestNetworkPlugins/group/calico/NetCatPod 13.24
311 TestNetworkPlugins/group/calico/DNS 0.15
312 TestNetworkPlugins/group/calico/Localhost 0.13
313 TestNetworkPlugins/group/calico/HairPin 0.12
314 TestNetworkPlugins/group/bridge/Start 95.54
315 TestNetworkPlugins/group/flannel/Start 86.69
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.24
318 TestNetworkPlugins/group/custom-flannel/DNS 0.19
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
321 TestNetworkPlugins/group/enable-default-cni/Start 61.95
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
323 TestNetworkPlugins/group/bridge/NetCatPod 11.22
324 TestNetworkPlugins/group/flannel/ControllerPod 6.01
325 TestNetworkPlugins/group/bridge/DNS 0.16
326 TestNetworkPlugins/group/bridge/Localhost 0.13
327 TestNetworkPlugins/group/bridge/HairPin 0.14
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
329 TestNetworkPlugins/group/flannel/NetCatPod 12.27
330 TestNetworkPlugins/group/flannel/DNS 0.17
331 TestNetworkPlugins/group/flannel/Localhost 0.13
332 TestNetworkPlugins/group/flannel/HairPin 0.14
336 TestStartStop/group/no-preload/serial/FirstStart 116.42
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
339 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
340 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
341 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
343 TestStartStop/group/embed-certs/serial/FirstStart 63.73
344 TestStartStop/group/embed-certs/serial/DeployApp 9.27
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.93
347 TestStartStop/group/no-preload/serial/DeployApp 11.27
349 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 95.22
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.27
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.88
358 TestStartStop/group/embed-certs/serial/SecondStart 672.18
360 TestStartStop/group/no-preload/serial/SecondStart 599.2
361 TestStartStop/group/old-k8s-version/serial/Stop 2.28
362 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
365 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 524.5
375 TestStartStop/group/newest-cni/serial/FirstStart 43.51
376 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
378 TestStartStop/group/newest-cni/serial/Stop 10.47
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
380 TestStartStop/group/newest-cni/serial/SecondStart 33.18
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
384 TestStartStop/group/newest-cni/serial/Pause 2.21
x
+
TestDownloadOnly/v1.20.0/json-events (23.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-285503 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-285503 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.833667837s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.83s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-285503
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-285503: exit status 85 (53.633299ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-285503 | jenkins | v1.33.1 | 17 Jul 24 17:11 UTC |          |
	|         | -p download-only-285503        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 17:11:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 17:11:25.339452   21589 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:11:25.339538   21589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:11:25.339542   21589 out.go:304] Setting ErrFile to fd 2...
	I0717 17:11:25.339547   21589 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:11:25.339708   21589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	W0717 17:11:25.339803   21589 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19283-14386/.minikube/config/config.json: open /home/jenkins/minikube-integration/19283-14386/.minikube/config/config.json: no such file or directory
	I0717 17:11:25.340313   21589 out.go:298] Setting JSON to true
	I0717 17:11:25.341149   21589 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3228,"bootTime":1721233057,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 17:11:25.341207   21589 start.go:139] virtualization: kvm guest
	I0717 17:11:25.343693   21589 out.go:97] [download-only-285503] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0717 17:11:25.343792   21589 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 17:11:25.343843   21589 notify.go:220] Checking for updates...
	I0717 17:11:25.345212   21589 out.go:169] MINIKUBE_LOCATION=19283
	I0717 17:11:25.346553   21589 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 17:11:25.347854   21589 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:11:25.349003   21589 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:11:25.350286   21589 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 17:11:25.352593   21589 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 17:11:25.352848   21589 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 17:11:25.450250   21589 out.go:97] Using the kvm2 driver based on user configuration
	I0717 17:11:25.450277   21589 start.go:297] selected driver: kvm2
	I0717 17:11:25.450285   21589 start.go:901] validating driver "kvm2" against <nil>
	I0717 17:11:25.450632   21589 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:11:25.450747   21589 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 17:11:25.465052   21589 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 17:11:25.465108   21589 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 17:11:25.465612   21589 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 17:11:25.465748   21589 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 17:11:25.465769   21589 cni.go:84] Creating CNI manager for ""
	I0717 17:11:25.465776   21589 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 17:11:25.465783   21589 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 17:11:25.465832   21589 start.go:340] cluster config:
	{Name:download-only-285503 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-285503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:11:25.466015   21589 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:11:25.467899   21589 out.go:97] Downloading VM boot image ...
	I0717 17:11:25.467941   21589 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/iso/amd64/minikube-v1.33.1-1721146474-19264-amd64.iso
	I0717 17:11:36.442833   21589 out.go:97] Starting "download-only-285503" primary control-plane node in "download-only-285503" cluster
	I0717 17:11:36.442860   21589 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 17:11:36.537517   21589 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0717 17:11:36.537552   21589 cache.go:56] Caching tarball of preloaded images
	I0717 17:11:36.537710   21589 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0717 17:11:36.539587   21589 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 17:11:36.539614   21589 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 17:11:36.638167   21589 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-285503 host does not exist
	  To start a cluster, run: "minikube start -p download-only-285503"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-285503
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/json-events (11.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-840522 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-840522 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.578633376s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (11.58s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-840522
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-840522: exit status 85 (52.952293ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-285503 | jenkins | v1.33.1 | 17 Jul 24 17:11 UTC |                     |
	|         | -p download-only-285503        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 17 Jul 24 17:11 UTC | 17 Jul 24 17:11 UTC |
	| delete  | -p download-only-285503        | download-only-285503 | jenkins | v1.33.1 | 17 Jul 24 17:11 UTC | 17 Jul 24 17:11 UTC |
	| start   | -o=json --download-only        | download-only-840522 | jenkins | v1.33.1 | 17 Jul 24 17:11 UTC |                     |
	|         | -p download-only-840522        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 17:11:49
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 17:11:49.471622   21849 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:11:49.471721   21849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:11:49.471729   21849 out.go:304] Setting ErrFile to fd 2...
	I0717 17:11:49.471734   21849 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:11:49.471910   21849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:11:49.472447   21849 out.go:298] Setting JSON to true
	I0717 17:11:49.473360   21849 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3252,"bootTime":1721233057,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 17:11:49.473414   21849 start.go:139] virtualization: kvm guest
	I0717 17:11:49.475567   21849 out.go:97] [download-only-840522] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 17:11:49.475715   21849 notify.go:220] Checking for updates...
	I0717 17:11:49.477184   21849 out.go:169] MINIKUBE_LOCATION=19283
	I0717 17:11:49.478624   21849 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 17:11:49.479863   21849 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:11:49.481201   21849 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:11:49.482301   21849 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 17:11:49.484606   21849 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 17:11:49.484855   21849 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 17:11:49.515768   21849 out.go:97] Using the kvm2 driver based on user configuration
	I0717 17:11:49.515793   21849 start.go:297] selected driver: kvm2
	I0717 17:11:49.515806   21849 start.go:901] validating driver "kvm2" against <nil>
	I0717 17:11:49.516114   21849 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:11:49.516186   21849 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 17:11:49.530390   21849 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 17:11:49.530447   21849 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 17:11:49.530913   21849 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 17:11:49.531072   21849 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 17:11:49.531094   21849 cni.go:84] Creating CNI manager for ""
	I0717 17:11:49.531100   21849 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 17:11:49.531110   21849 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 17:11:49.531166   21849 start.go:340] cluster config:
	{Name:download-only-840522 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-840522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:11:49.531268   21849 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:11:49.532993   21849 out.go:97] Starting "download-only-840522" primary control-plane node in "download-only-840522" cluster
	I0717 17:11:49.533017   21849 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:11:50.041054   21849 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 17:11:50.041086   21849 cache.go:56] Caching tarball of preloaded images
	I0717 17:11:50.041232   21849 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime crio
	I0717 17:11:50.043080   21849 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0717 17:11:50.043094   21849 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0717 17:11:50.141349   21849 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:cd14409e225276132db5cf7d5d75c2d2 -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4
	I0717 17:11:59.443693   21849 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	I0717 17:11:59.443806   21849 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-840522 host does not exist
	  To start a cluster, run: "minikube start -p download-only-840522"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-840522
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (17.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-865281 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-865281 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.777633284s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (17.78s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-865281
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-865281: exit status 85 (57.116483ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-285503 | jenkins | v1.33.1 | 17 Jul 24 17:11 UTC |                     |
	|         | -p download-only-285503             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 17:11 UTC | 17 Jul 24 17:11 UTC |
	| delete  | -p download-only-285503             | download-only-285503 | jenkins | v1.33.1 | 17 Jul 24 17:11 UTC | 17 Jul 24 17:11 UTC |
	| start   | -o=json --download-only             | download-only-840522 | jenkins | v1.33.1 | 17 Jul 24 17:11 UTC |                     |
	|         | -p download-only-840522             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| delete  | -p download-only-840522             | download-only-840522 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC | 17 Jul 24 17:12 UTC |
	| start   | -o=json --download-only             | download-only-865281 | jenkins | v1.33.1 | 17 Jul 24 17:12 UTC |                     |
	|         | -p download-only-865281             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 17:12:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 17:12:01.345952   22056 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:12:01.346040   22056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:12:01.346047   22056 out.go:304] Setting ErrFile to fd 2...
	I0717 17:12:01.346051   22056 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:12:01.346194   22056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:12:01.346716   22056 out.go:298] Setting JSON to true
	I0717 17:12:01.347478   22056 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3264,"bootTime":1721233057,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 17:12:01.347527   22056 start.go:139] virtualization: kvm guest
	I0717 17:12:01.349519   22056 out.go:97] [download-only-865281] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 17:12:01.349636   22056 notify.go:220] Checking for updates...
	I0717 17:12:01.350895   22056 out.go:169] MINIKUBE_LOCATION=19283
	I0717 17:12:01.352316   22056 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 17:12:01.353470   22056 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:12:01.354596   22056 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:12:01.355756   22056 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 17:12:01.357851   22056 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 17:12:01.358064   22056 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 17:12:01.389689   22056 out.go:97] Using the kvm2 driver based on user configuration
	I0717 17:12:01.389718   22056 start.go:297] selected driver: kvm2
	I0717 17:12:01.389726   22056 start.go:901] validating driver "kvm2" against <nil>
	I0717 17:12:01.390030   22056 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:12:01.390119   22056 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19283-14386/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0717 17:12:01.404345   22056 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0717 17:12:01.404394   22056 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 17:12:01.404860   22056 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0717 17:12:01.405027   22056 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 17:12:01.405088   22056 cni.go:84] Creating CNI manager for ""
	I0717 17:12:01.405105   22056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0717 17:12:01.405114   22056 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 17:12:01.405177   22056 start.go:340] cluster config:
	{Name:download-only-865281 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-865281 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:12:01.405467   22056 iso.go:125] acquiring lock: {Name:mk51ed12bcfc9e673ec68e34040c2adda4f249c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 17:12:01.407144   22056 out.go:97] Starting "download-only-865281" primary control-plane node in "download-only-865281" cluster
	I0717 17:12:01.407161   22056 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 17:12:01.542568   22056 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0717 17:12:01.542605   22056 cache.go:56] Caching tarball of preloaded images
	I0717 17:12:01.542759   22056 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0717 17:12:01.544789   22056 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0717 17:12:01.544813   22056 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0717 17:12:01.639913   22056 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19283-14386/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-865281 host does not exist
	  To start a cluster, run: "minikube start -p download-only-865281"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)
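
Note: the preload tarball above is fetched with an md5 checksum appended to the URL (?checksum=md5:...), so the download can be verified before it is cached. A minimal sketch of that download-then-verify step in Go, using only the standard library; the URL and destination path below are placeholders, and the hash is simply the one printed in the log line above, not a general-purpose value.

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadAndVerify fetches url into dest and compares the md5 sum of the
    // written bytes against wantMD5 (hex-encoded), mirroring the
    // checksum-verified download pattern seen in the preload log above.
    func downloadAndVerify(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := md5.New()
        // Write to the file and the hash at the same time.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        // Placeholder URL/path; hash copied from the log line above for illustration.
        err := downloadAndVerify(
            "https://example.com/preloaded-images.tar.lz4",
            "/tmp/preloaded-images.tar.lz4",
            "3743f5ddb63994a661f14e5a8d3af98c",
        )
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }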

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-865281
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-325566 --alsologtostderr --binary-mirror http://127.0.0.1:45523 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-325566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-325566
--- PASS: TestBinaryMirror (0.54s)

                                                
                                    
TestOffline (59.99s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-406802 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-406802 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (58.774404494s)
helpers_test.go:175: Cleaning up "offline-crio-406802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-406802
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-406802: (1.219168997s)
--- PASS: TestOffline (59.99s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-435911
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-435911: exit status 85 (44.470635ms)

                                                
                                                
-- stdout --
	* Profile "addons-435911" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-435911"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-435911
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-435911: exit status 85 (44.757524ms)

                                                
                                                
-- stdout --
	* Profile "addons-435911" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-435911"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestAddons/Setup (201.46s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-435911 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-435911 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m21.456086528s)
--- PASS: TestAddons/Setup (201.46s)

                                                
                                    
TestAddons/parallel/Registry (32.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 17.438848ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-k8vqb" [b2c62d08-0816-405d-b5e4-78e70611f29b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008271998s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qxnzl" [a6c49b2c-06f8-4825-b8b7-d2233c0cb798] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005467668s
addons_test.go:342: (dbg) Run:  kubectl --context addons-435911 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-435911 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-435911 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (19.924627438s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 ip
2024/07/17 17:16:13 [DEBUG] GET http://192.168.39.27:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (32.67s)
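
Note: the registry check above works by running a throwaway busybox pod that issues wget --spider against the in-cluster service URL. A rough Go equivalent of that reachability probe is sketched below; it assumes it runs somewhere the service DNS name (or the node IP and port printed above) is reachable, which is an assumption about your environment, not something the test provides from outside the cluster.

    package main

    import (
        "fmt"
        "net/http"
        "os"
        "time"
    )

    // probe issues HEAD requests against url until it answers with a non-error
    // status or the attempts run out, mirroring the "is the registry up?" check
    // done with wget --spider in the test above.
    func probe(url string, attempts int, delay time.Duration) error {
        client := &http.Client{Timeout: 5 * time.Second}
        var lastErr error
        for i := 0; i < attempts; i++ {
            resp, err := client.Head(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode < 400 {
                    return nil
                }
                lastErr = fmt.Errorf("unexpected status %d", resp.StatusCode)
            } else {
                lastErr = err
            }
            time.Sleep(delay)
        }
        return lastErr
    }

    func main() {
        // In-cluster DNS name from the test; from outside the cluster you would
        // use the node IP and port shown above (e.g. http://192.168.39.27:5000).
        if err := probe("http://registry.kube-system.svc.cluster.local", 10, 2*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, "registry not reachable:", err)
            os.Exit(1)
        }
        fmt.Println("registry answered")
    }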

                                                
                                    
TestAddons/parallel/InspektorGadget (11.05s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4wtq7" [1d849eff-6172-4493-91c1-d21431b233c0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006105535s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-435911
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-435911: (6.040092975s)
--- PASS: TestAddons/parallel/InspektorGadget (11.05s)

                                                
                                    
TestAddons/parallel/HelmTiller (22.13s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 17.328943ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-4vwq8" [bb7ff47b-ce42-448a-bc9b-96324fdaac73] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005485455s
addons_test.go:475: (dbg) Run:  kubectl --context addons-435911 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-435911 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (15.377513801s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (22.13s)

                                                
                                    
TestAddons/parallel/CSI (61.19s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 6.007471ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-435911 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-435911 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [dc4d46ad-93a8-4cce-98ec-63b79e464444] Pending
helpers_test.go:344: "task-pv-pod" [dc4d46ad-93a8-4cce-98ec-63b79e464444] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [dc4d46ad-93a8-4cce-98ec-63b79e464444] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 26.003579411s
addons_test.go:586: (dbg) Run:  kubectl --context addons-435911 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-435911 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-435911 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-435911 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-435911 delete pod task-pv-pod: (1.353676392s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-435911 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-435911 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-435911 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c36c3731-173a-42e1-bea1-632637e4cf88] Pending
helpers_test.go:344: "task-pv-pod-restore" [c36c3731-173a-42e1-bea1-632637e4cf88] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c36c3731-173a-42e1-bea1-632637e4cf88] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003632019s
addons_test.go:628: (dbg) Run:  kubectl --context addons-435911 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-435911 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-435911 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-435911 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.71801864s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (61.19s)
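
Note: most of the CSI test's wall-clock time is spent in helpers that repeatedly read a PVC's .status.phase via kubectl until it leaves Pending, as the long run of identical kubectl lines above shows. A small sketch of that polling loop, shelling out to kubectl the same way; the context, namespace, and claim names are examples only.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCPhase polls `kubectl get pvc` until the claim reports the wanted
    // phase (e.g. "Bound") or the timeout expires.
    func waitForPVCPhase(ctx, namespace, pvc, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", ctx,
                "get", "pvc", pvc, "-n", namespace,
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", namespace, pvc, want, timeout)
    }

    func main() {
        // Example values; the test uses its own profile name and claim names.
        if err := waitForPVCPhase("addons-435911", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("claim is bound")
    }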

                                                
                                    
TestAddons/parallel/Headlamp (23.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-435911 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-znd2v" [46cfb6c7-3a68-411b-968e-8ab21c2226ff] Pending
helpers_test.go:344: "headlamp-7867546754-znd2v" [46cfb6c7-3a68-411b-968e-8ab21c2226ff] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-znd2v" [46cfb6c7-3a68-411b-968e-8ab21c2226ff] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.262030437s
--- PASS: TestAddons/parallel/Headlamp (23.13s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-m82fj" [4ececc38-dcb0-4f31-8b4a-c54efd1dfdb9] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003401018s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-435911
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                    
TestAddons/parallel/LocalPath (12.08s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-435911 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-435911 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435911 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b47c5fe2-f12b-4678-8c95-8e7a559a5074] Pending
helpers_test.go:344: "test-local-path" [b47c5fe2-f12b-4678-8c95-8e7a559a5074] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b47c5fe2-f12b-4678-8c95-8e7a559a5074] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b47c5fe2-f12b-4678-8c95-8e7a559a5074] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003879972s
addons_test.go:992: (dbg) Run:  kubectl --context addons-435911 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 ssh "cat /opt/local-path-provisioner/pvc-f3597c1f-ead9-4165-91c7-88a61a002e8f_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-435911 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-435911 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-435911 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.08s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.69s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xst8q" [a0449eb2-9a20-4b3a-b414-1a8ca2c38090] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005995006s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-435911
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.69s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-gj64l" [d75d651e-dc3f-4ea9-b380-f7637ab4ce97] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004422408s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-435911 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-435911 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (76.04s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-175280 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-175280 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m14.656649057s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-175280 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-175280 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-175280 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-175280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-175280
--- PASS: TestCertOptions (76.04s)
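
Note: TestCertOptions starts a cluster with extra --apiserver-ips/--apiserver-names and then inspects the apiserver certificate with openssl. The same check can be done programmatically; the sketch below parses a PEM certificate from disk and looks for the expected SANs. The file path and SAN values are taken from the test invocation above purely as an example and would differ on another host.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "net"
        "os"
    )

    // certHasSANs reports whether the first certificate in pemPath contains the
    // given DNS name and IP address among its subject alternative names.
    func certHasSANs(pemPath, dnsName, ip string) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block found in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        hasDNS := false
        for _, d := range cert.DNSNames {
            if d == dnsName {
                hasDNS = true
            }
        }
        hasIP := false
        want := net.ParseIP(ip)
        for _, i := range cert.IPAddresses {
            if i.Equal(want) {
                hasIP = true
            }
        }
        return hasDNS && hasIP, nil
    }

    func main() {
        // Path and SANs as used in the test invocation above; adjust for your host.
        ok, err := certHasSANs("/var/lib/minikube/certs/apiserver.crt", "www.google.com", "192.168.15.15")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("certificate carries the requested SANs:", ok)
    }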

                                                
                                    
TestCertExpiration (286.92s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-907422 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-907422 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m12.35615571s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-907422 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-907422 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (33.763381594s)
helpers_test.go:175: Cleaning up "cert-expiration-907422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-907422
--- PASS: TestCertExpiration (286.92s)

                                                
                                    
TestForceSystemdFlag (79.99s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-293455 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0717 18:20:41.791730   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-293455 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.020676223s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-293455 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-293455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-293455
--- PASS: TestForceSystemdFlag (79.99s)

                                                
                                    
TestForceSystemdEnv (83.51s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-552189 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-552189 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.542185407s)
helpers_test.go:175: Cleaning up "force-systemd-env-552189" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-552189
--- PASS: TestForceSystemdEnv (83.51s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.8s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.80s)

                                                
                                    
TestErrorSpam/setup (38.2s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-939112 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-939112 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-939112 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-939112 --driver=kvm2  --container-runtime=crio: (38.2047529s)
--- PASS: TestErrorSpam/setup (38.20s)

                                                
                                    
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
TestErrorSpam/pause (1.45s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 pause
--- PASS: TestErrorSpam/pause (1.45s)

                                                
                                    
TestErrorSpam/unpause (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

                                                
                                    
TestErrorSpam/stop (4.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 stop: (1.512287478s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-939112 --log_dir /tmp/nospam-939112 stop: (1.960921713s)
--- PASS: TestErrorSpam/stop (4.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19283-14386/.minikube/files/etc/test/nested/copy/21577/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (90.99s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174661 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0717 17:25:41.791812   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:25:41.797385   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:25:41.807659   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:25:41.828642   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:25:41.868890   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:25:41.949177   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:25:42.109593   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:25:42.430094   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:25:43.071011   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:25:44.351491   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:25:46.911960   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:25:52.033166   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:26:02.273294   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:26:22.754121   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-174661 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m30.986290263s)
--- PASS: TestFunctional/serial/StartWithProxy (90.99s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.63s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174661 --alsologtostderr -v=8
E0717 17:27:03.714367   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-174661 --alsologtostderr -v=8: (39.628608406s)
functional_test.go:659: soft start took 39.62918566s for "functional-174661" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.63s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-174661 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 cache add registry.k8s.io/pause:3.1: (1.170389003s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 cache add registry.k8s.io/pause:3.3: (1.343120992s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 cache add registry.k8s.io/pause:latest: (1.11651247s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-174661 /tmp/TestFunctionalserialCacheCmdcacheadd_local2768755562/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 cache add minikube-local-cache-test:functional-174661
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 cache add minikube-local-cache-test:functional-174661: (1.686358373s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 cache delete minikube-local-cache-test:functional-174661
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-174661
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.98s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174661 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (204.29156ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
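
Note: the cache_reload sequence above removes an image on the node, confirms `crictl inspecti` now fails, runs `minikube cache reload`, and confirms the image is back. Below is a sketch of that remove/reload/verify cycle driven from Go; the binary path and profile name are copied from the log lines above for illustration and are not fixed values.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // run executes the minikube binary with the given args and returns any error.
    func run(args ...string) error {
        cmd := exec.Command("out/minikube-linux-amd64", args...)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        const profile = "functional-174661" // example profile name
        const image = "registry.k8s.io/pause:latest"

        // Remove the image inside the node; ignore the error if it is already gone.
        _ = run("-p", profile, "ssh", "sudo", "crictl", "rmi", image)

        // Expect the inspect to fail now: a nil error means the image is still present.
        if err := run("-p", profile, "ssh", "sudo", "crictl", "inspecti", image); err == nil {
            fmt.Fprintln(os.Stderr, "image still present before reload")
            os.Exit(1)
        }

        // Reload the cache and verify the image is back.
        if err := run("-p", profile, "cache", "reload"); err != nil {
            fmt.Fprintln(os.Stderr, "cache reload failed:", err)
            os.Exit(1)
        }
        if err := run("-p", profile, "ssh", "sudo", "crictl", "inspecti", image); err != nil {
            fmt.Fprintln(os.Stderr, "image missing after reload:", err)
            os.Exit(1)
        }
        fmt.Println("cache reload restored", image)
    }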

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 kubectl -- --context functional-174661 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-174661 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.46s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174661 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-174661 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.461822011s)
functional_test.go:757: restart took 34.461927599s for "functional-174661" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.46s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-174661 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
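
Note: ComponentHealth lists the control-plane pods as JSON and checks that each reports phase Running and a Ready status, as the lines above show. A compact sketch of the same check, decoding only the fields it needs from `kubectl get po -o json`; the context name is an example, and the structs cover just the fields used here.

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "os/exec"
    )

    type podList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Phase      string `json:"phase"`
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-174661",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, p := range pods.Items {
            ready := "False"
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" {
                    ready = c.Status
                }
            }
            fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
        }
    }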

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 logs: (1.444398392s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 logs --file /tmp/TestFunctionalserialLogsFileCmd1234543362/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 logs --file /tmp/TestFunctionalserialLogsFileCmd1234543362/001/logs.txt: (1.3126323s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                    
TestFunctional/serial/InvalidService (3.96s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-174661 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-174661
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-174661: exit status 115 (261.035752ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.77:32224 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-174661 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)
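
Note: InvalidService expects `minikube service` to fail with a specific exit status (115, SVC_UNREACHABLE) rather than merely fail. Checking for a particular exit code from Go can be done as sketched below; the command and expected code are taken from the log above, everything else is illustrative.

    package main

    import (
        "errors"
        "fmt"
        "os"
        "os/exec"
    )

    // exitCode runs the command and returns its exit status, or an error if the
    // process could not be started or was terminated abnormally.
    func exitCode(name string, args ...string) (int, error) {
        err := exec.Command(name, args...).Run()
        if err == nil {
            return 0, nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return ee.ExitCode(), nil
        }
        return 0, err
    }

    func main() {
        code, err := exitCode("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-174661")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if code != 115 {
            fmt.Fprintf(os.Stderr, "expected exit status 115 (SVC_UNREACHABLE), got %d\n", code)
            os.Exit(1)
        }
        fmt.Println("got the expected SVC_UNREACHABLE exit status")
    }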

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174661 config get cpus: exit status 14 (50.559888ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174661 config get cpus: exit status 14 (43.092042ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
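For reference, a minimal standalone sketch (not from the test suite) of the exit-code behaviour exercised above: "minikube config get" on a key that has just been unset is expected to fail, and a caller can detect that through the process exit status. The sketch assumes a minikube binary on PATH and reuses the profile name from the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "config get" on an unset key fails with a non-zero exit status,
	// matching the "exit status 14" lines in the log above.
	cmd := exec.Command("minikube", "-p", "functional-174661", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("exit status %d: %s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("failed to run minikube:", err)
		return
	}
	fmt.Printf("value: %s", out)
}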

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (11.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-174661 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-174661 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 31572: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.38s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174661 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-174661 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (128.868609ms)

                                                
                                                
-- stdout --
	* [functional-174661] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:28:47.146332   31454 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:28:47.146659   31454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:28:47.146672   31454 out.go:304] Setting ErrFile to fd 2...
	I0717 17:28:47.146681   31454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:28:47.146972   31454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:28:47.147617   31454 out.go:298] Setting JSON to false
	I0717 17:28:47.148860   31454 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4270,"bootTime":1721233057,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 17:28:47.148938   31454 start.go:139] virtualization: kvm guest
	I0717 17:28:47.151252   31454 out.go:177] * [functional-174661] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 17:28:47.152591   31454 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 17:28:47.152588   31454 notify.go:220] Checking for updates...
	I0717 17:28:47.154863   31454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 17:28:47.156043   31454 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:28:47.157367   31454 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:28:47.158525   31454 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 17:28:47.159654   31454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 17:28:47.161364   31454 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:28:47.161964   31454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:28:47.162043   31454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:28:47.178918   31454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42977
	I0717 17:28:47.179336   31454 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:28:47.179865   31454 main.go:141] libmachine: Using API Version  1
	I0717 17:28:47.179881   31454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:28:47.180255   31454 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:28:47.180446   31454 main.go:141] libmachine: (functional-174661) Calling .DriverName
	I0717 17:28:47.180678   31454 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 17:28:47.180963   31454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:28:47.180999   31454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:28:47.195751   31454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38859
	I0717 17:28:47.196137   31454 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:28:47.196611   31454 main.go:141] libmachine: Using API Version  1
	I0717 17:28:47.196629   31454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:28:47.196915   31454 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:28:47.197118   31454 main.go:141] libmachine: (functional-174661) Calling .DriverName
	I0717 17:28:47.228443   31454 out.go:177] * Using the kvm2 driver based on existing profile
	I0717 17:28:47.229644   31454 start.go:297] selected driver: kvm2
	I0717 17:28:47.229660   31454 start.go:901] validating driver "kvm2" against &{Name:functional-174661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-174661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:28:47.229768   31454 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 17:28:47.231733   31454 out.go:177] 
	W0717 17:28:47.232979   31454 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 17:28:47.234178   31454 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174661 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174661 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-174661 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (125.485004ms)

                                                
                                                
-- stdout --
	* [functional-174661] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 17:28:47.393833   31509 out.go:291] Setting OutFile to fd 1 ...
	I0717 17:28:47.393934   31509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:28:47.393944   31509 out.go:304] Setting ErrFile to fd 2...
	I0717 17:28:47.393948   31509 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 17:28:47.394214   31509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 17:28:47.394689   31509 out.go:298] Setting JSON to false
	I0717 17:28:47.395600   31509 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4270,"bootTime":1721233057,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 17:28:47.395656   31509 start.go:139] virtualization: kvm guest
	I0717 17:28:47.397816   31509 out.go:177] * [functional-174661] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0717 17:28:47.399227   31509 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 17:28:47.399273   31509 notify.go:220] Checking for updates...
	I0717 17:28:47.401767   31509 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 17:28:47.402934   31509 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 17:28:47.404143   31509 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 17:28:47.405338   31509 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 17:28:47.406438   31509 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 17:28:47.407975   31509 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 17:28:47.408334   31509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:28:47.408393   31509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:28:47.422797   31509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39603
	I0717 17:28:47.423156   31509 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:28:47.423641   31509 main.go:141] libmachine: Using API Version  1
	I0717 17:28:47.423660   31509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:28:47.423953   31509 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:28:47.424144   31509 main.go:141] libmachine: (functional-174661) Calling .DriverName
	I0717 17:28:47.424340   31509 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 17:28:47.424616   31509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 17:28:47.424649   31509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 17:28:47.440072   31509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46199
	I0717 17:28:47.440486   31509 main.go:141] libmachine: () Calling .GetVersion
	I0717 17:28:47.441049   31509 main.go:141] libmachine: Using API Version  1
	I0717 17:28:47.441072   31509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 17:28:47.441372   31509 main.go:141] libmachine: () Calling .GetMachineName
	I0717 17:28:47.441584   31509 main.go:141] libmachine: (functional-174661) Calling .DriverName
	I0717 17:28:47.473338   31509 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0717 17:28:47.474954   31509 start.go:297] selected driver: kvm2
	I0717 17:28:47.474972   31509 start.go:901] validating driver "kvm2" against &{Name:functional-174661 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19264/minikube-v1.33.1-1721146474-19264-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.2 ClusterName:functional-174661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 17:28:47.475071   31509 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 17:28:47.477151   31509 out.go:177] 
	W0717 17:28:47.478557   31509 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 17:28:47.479771   31509 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)
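The -f argument passed to "status" above is a Go text/template rendered against minikube's status fields. A minimal sketch of how such a template evaluates, assuming field names Host, Kubelet, APIServer and Kubeconfig (the label "kublet" is copied verbatim from the command above; only the field reference {{.Kubelet}} matters to the template):

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields referenced by the -f template in the log;
// the struct here is an assumption for illustration, not minikube's own type.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	_ = tmpl.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	})
}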

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-174661 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-174661 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-5745d" [afd4cd59-95ef-465a-aaed-c7deda6cd03b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-5745d" [afd4cd59-95ef-465a-aaed-c7deda6cd03b] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00509785s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.77:32049
functional_test.go:1671: http://192.168.39.77:32049: success! body:

Hostname: hello-node-connect-57b4589c47-5745d

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.77:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.77:32049
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.43s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (44.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a1cfe311-5048-4320-83ea-c3b21564037f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004443351s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-174661 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-174661 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-174661 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-174661 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ace001fd-0863-406a-8c8a-e0f7cda7f33b] Pending
helpers_test.go:344: "sp-pod" [ace001fd-0863-406a-8c8a-e0f7cda7f33b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ace001fd-0863-406a-8c8a-e0f7cda7f33b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004674919s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-174661 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-174661 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-174661 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [23d56cee-dca6-41b2-9e59-ee75c52a9729] Pending
helpers_test.go:344: "sp-pod" [23d56cee-dca6-41b2-9e59-ee75c52a9729] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [23d56cee-dca6-41b2-9e59-ee75c52a9729] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004307504s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-174661 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.72s)
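A minimal sketch (not part of the test) of the persistence check performed above: write a marker file into the PVC-backed mount, recreate the pod from the same manifest, and confirm the file is still present. It assumes kubectl on PATH and reuses the context and manifest paths from the log; the readiness wait between steps is omitted.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl and echoes the command plus its combined output.
func run(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	ctx := "functional-174661" // context name taken from the log above
	// Write a marker into the PVC-backed mount, recreate the pod,
	// then confirm the marker survives (the claim outlives the pod).
	_ = run("--context", ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_ = run("--context", ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = run("--context", ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// In the real test a wait for the new pod to be Running happens here.
	_ = run("--context", ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount")
}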

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh -n functional-174661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 cp functional-174661:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2743575136/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh -n functional-174661 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh -n functional-174661 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (22.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-174661 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-szdtc" [ccaa74a6-13d9-43df-a97f-e9a972870d1d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-szdtc" [ccaa74a6-13d9-43df-a97f-e9a972870d1d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003243117s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-174661 exec mysql-64454c8b5c-szdtc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-174661 exec mysql-64454c8b5c-szdtc -- mysql -ppassword -e "show databases;": exit status 1 (606.184167ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-174661 exec mysql-64454c8b5c-szdtc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.42s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/21577/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "sudo cat /etc/test/nested/copy/21577/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/21577.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "sudo cat /etc/ssl/certs/21577.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/21577.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "sudo cat /usr/share/ca-certificates/21577.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/215772.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "sudo cat /etc/ssl/certs/215772.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/215772.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "sudo cat /usr/share/ca-certificates/215772.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-174661 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174661 ssh "sudo systemctl is-active docker": exit status 1 (223.802021ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174661 ssh "sudo systemctl is-active containerd": exit status 1 (203.999572ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174661 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-174661
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240513-cd2ac642
docker.io/kicbase/echo-server:functional-174661
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-174661 image ls --format short --alsologtostderr:
I0717 17:28:57.940402   32193 out.go:291] Setting OutFile to fd 1 ...
I0717 17:28:57.940493   32193 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:28:57.940503   32193 out.go:304] Setting ErrFile to fd 2...
I0717 17:28:57.940507   32193 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:28:57.940673   32193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
I0717 17:28:57.941203   32193 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 17:28:57.941298   32193 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 17:28:57.941741   32193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 17:28:57.941790   32193 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:28:57.957058   32193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
I0717 17:28:57.957491   32193 main.go:141] libmachine: () Calling .GetVersion
I0717 17:28:57.958010   32193 main.go:141] libmachine: Using API Version  1
I0717 17:28:57.958032   32193 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:28:57.958369   32193 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:28:57.958603   32193 main.go:141] libmachine: (functional-174661) Calling .GetState
I0717 17:28:57.960519   32193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 17:28:57.960558   32193 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:28:57.974825   32193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
I0717 17:28:57.975292   32193 main.go:141] libmachine: () Calling .GetVersion
I0717 17:28:57.975841   32193 main.go:141] libmachine: Using API Version  1
I0717 17:28:57.975868   32193 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:28:57.976163   32193 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:28:57.976396   32193 main.go:141] libmachine: (functional-174661) Calling .DriverName
I0717 17:28:57.976589   32193 ssh_runner.go:195] Run: systemctl --version
I0717 17:28:57.976611   32193 main.go:141] libmachine: (functional-174661) Calling .GetSSHHostname
I0717 17:28:57.979280   32193 main.go:141] libmachine: (functional-174661) DBG | domain functional-174661 has defined MAC address 52:54:00:58:8c:dd in network mk-functional-174661
I0717 17:28:57.979747   32193 main.go:141] libmachine: (functional-174661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8c:dd", ip: ""} in network mk-functional-174661: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:34 +0000 UTC Type:0 Mac:52:54:00:58:8c:dd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-174661 Clientid:01:52:54:00:58:8c:dd}
I0717 17:28:57.979768   32193 main.go:141] libmachine: (functional-174661) DBG | domain functional-174661 has defined IP address 192.168.39.77 and MAC address 52:54:00:58:8c:dd in network mk-functional-174661
I0717 17:28:57.979837   32193 main.go:141] libmachine: (functional-174661) Calling .GetSSHPort
I0717 17:28:57.979991   32193 main.go:141] libmachine: (functional-174661) Calling .GetSSHKeyPath
I0717 17:28:57.980179   32193 main.go:141] libmachine: (functional-174661) Calling .GetSSHUsername
I0717 17:28:57.980421   32193 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/functional-174661/id_rsa Username:docker}
I0717 17:28:58.059499   32193 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 17:28:58.094387   32193 main.go:141] libmachine: Making call to close driver server
I0717 17:28:58.094404   32193 main.go:141] libmachine: (functional-174661) Calling .Close
I0717 17:28:58.094698   32193 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:28:58.094715   32193 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:28:58.094730   32193 main.go:141] libmachine: Making call to close driver server
I0717 17:28:58.094739   32193 main.go:141] libmachine: (functional-174661) Calling .Close
I0717 17:28:58.094942   32193 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:28:58.094956   32193 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:28:58.094972   32193 main.go:141] libmachine: (functional-174661) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174661 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kicbase/echo-server           | functional-174661  | 9056ab77afb8e | 4.94MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | fffffc90d343c | 192MB  |
| localhost/minikube-local-cache-test     | functional-174661  | 498a72bcac42e | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.30.2            | 56ce0fd9fb532 | 118MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20240513-cd2ac642 | ac1c61439df46 | 65.9MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-scheduler          | v1.30.2            | 7820c83aa1394 | 63.1MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.2            | e874818b3caac | 112MB  |
| registry.k8s.io/kube-proxy              | v1.30.2            | 53c535741fb44 | 86MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-174661 image ls --format table --alsologtostderr:
I0717 17:28:59.150290   32320 out.go:291] Setting OutFile to fd 1 ...
I0717 17:28:59.150556   32320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:28:59.150565   32320 out.go:304] Setting ErrFile to fd 2...
I0717 17:28:59.150569   32320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:28:59.150743   32320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
I0717 17:28:59.151288   32320 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 17:28:59.151381   32320 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 17:28:59.151745   32320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 17:28:59.151793   32320 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:28:59.166392   32320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
I0717 17:28:59.166800   32320 main.go:141] libmachine: () Calling .GetVersion
I0717 17:28:59.167360   32320 main.go:141] libmachine: Using API Version  1
I0717 17:28:59.167379   32320 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:28:59.167695   32320 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:28:59.167889   32320 main.go:141] libmachine: (functional-174661) Calling .GetState
I0717 17:28:59.169629   32320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 17:28:59.169665   32320 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:28:59.183701   32320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38473
I0717 17:28:59.184142   32320 main.go:141] libmachine: () Calling .GetVersion
I0717 17:28:59.184638   32320 main.go:141] libmachine: Using API Version  1
I0717 17:28:59.184662   32320 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:28:59.184956   32320 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:28:59.185122   32320 main.go:141] libmachine: (functional-174661) Calling .DriverName
I0717 17:28:59.185307   32320 ssh_runner.go:195] Run: systemctl --version
I0717 17:28:59.185350   32320 main.go:141] libmachine: (functional-174661) Calling .GetSSHHostname
I0717 17:28:59.187936   32320 main.go:141] libmachine: (functional-174661) DBG | domain functional-174661 has defined MAC address 52:54:00:58:8c:dd in network mk-functional-174661
I0717 17:28:59.188314   32320 main.go:141] libmachine: (functional-174661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8c:dd", ip: ""} in network mk-functional-174661: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:34 +0000 UTC Type:0 Mac:52:54:00:58:8c:dd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-174661 Clientid:01:52:54:00:58:8c:dd}
I0717 17:28:59.188347   32320 main.go:141] libmachine: (functional-174661) DBG | domain functional-174661 has defined IP address 192.168.39.77 and MAC address 52:54:00:58:8c:dd in network mk-functional-174661
I0717 17:28:59.188469   32320 main.go:141] libmachine: (functional-174661) Calling .GetSSHPort
I0717 17:28:59.188647   32320 main.go:141] libmachine: (functional-174661) Calling .GetSSHKeyPath
I0717 17:28:59.188844   32320 main.go:141] libmachine: (functional-174661) Calling .GetSSHUsername
I0717 17:28:59.188994   32320 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/functional-174661/id_rsa Username:docker}
I0717 17:28:59.280862   32320 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 17:28:59.359607   32320 main.go:141] libmachine: Making call to close driver server
I0717 17:28:59.359626   32320 main.go:141] libmachine: (functional-174661) Calling .Close
I0717 17:28:59.359986   32320 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:28:59.360005   32320 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:28:59.360014   32320 main.go:141] libmachine: Making call to close driver server
I0717 17:28:59.360013   32320 main.go:141] libmachine: (functional-174661) DBG | Closing plugin on server side
I0717 17:28:59.360022   32320 main.go:141] libmachine: (functional-174661) Calling .Close
I0717 17:28:59.360288   32320 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:28:59.360307   32320 main.go:141] libmachine: (functional-174661) DBG | Closing plugin on server side
I0717 17:28:59.360314   32320 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174661 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-174661"],"size":"4943877"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"i
d":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e","registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"112194888"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df","docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244"],"repoTags":["docker.io/library/nginx:latest"],"size":"191746190"},{"id":"cbb01a7bd410dc08ba382018ab
909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"498a72bcac42e72243f30f977c35e0066533af5ddc915a03b9b8f62e01d84056"
,"repoDigests":["localhost/minikube-local-cache-test@sha256:cffc37d201432f9be97c37604b6136794785a096e02dc7eb198d81dea5036c04"],"repoTags":["localhost/minikube-local-cache-test:functional-174661"],"size":"3328"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816","registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117609954"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f","repoDigests":["docker.io/ki
ndest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266","docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"65908273"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a5384
10","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":["registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961","registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"85953433"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc","registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"63051080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/p
ause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-174661 image ls --format json --alsologtostderr:
I0717 17:28:58.908080   32296 out.go:291] Setting OutFile to fd 1 ...
I0717 17:28:58.908204   32296 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:28:58.908215   32296 out.go:304] Setting ErrFile to fd 2...
I0717 17:28:58.908221   32296 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:28:58.908574   32296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
I0717 17:28:58.909385   32296 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 17:28:58.909537   32296 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 17:28:58.910090   32296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 17:28:58.910140   32296 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:28:58.925234   32296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32867
I0717 17:28:58.925766   32296 main.go:141] libmachine: () Calling .GetVersion
I0717 17:28:58.926453   32296 main.go:141] libmachine: Using API Version  1
I0717 17:28:58.926484   32296 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:28:58.926861   32296 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:28:58.927063   32296 main.go:141] libmachine: (functional-174661) Calling .GetState
I0717 17:28:58.929196   32296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 17:28:58.929250   32296 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:28:58.945002   32296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
I0717 17:28:58.945440   32296 main.go:141] libmachine: () Calling .GetVersion
I0717 17:28:58.946001   32296 main.go:141] libmachine: Using API Version  1
I0717 17:28:58.946047   32296 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:28:58.946349   32296 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:28:58.946605   32296 main.go:141] libmachine: (functional-174661) Calling .DriverName
I0717 17:28:58.946811   32296 ssh_runner.go:195] Run: systemctl --version
I0717 17:28:58.946839   32296 main.go:141] libmachine: (functional-174661) Calling .GetSSHHostname
I0717 17:28:58.949833   32296 main.go:141] libmachine: (functional-174661) DBG | domain functional-174661 has defined MAC address 52:54:00:58:8c:dd in network mk-functional-174661
I0717 17:28:58.950251   32296 main.go:141] libmachine: (functional-174661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8c:dd", ip: ""} in network mk-functional-174661: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:34 +0000 UTC Type:0 Mac:52:54:00:58:8c:dd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-174661 Clientid:01:52:54:00:58:8c:dd}
I0717 17:28:58.950283   32296 main.go:141] libmachine: (functional-174661) DBG | domain functional-174661 has defined IP address 192.168.39.77 and MAC address 52:54:00:58:8c:dd in network mk-functional-174661
I0717 17:28:58.950418   32296 main.go:141] libmachine: (functional-174661) Calling .GetSSHPort
I0717 17:28:58.950654   32296 main.go:141] libmachine: (functional-174661) Calling .GetSSHKeyPath
I0717 17:28:58.950815   32296 main.go:141] libmachine: (functional-174661) Calling .GetSSHUsername
I0717 17:28:58.950962   32296 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/functional-174661/id_rsa Username:docker}
I0717 17:28:59.054437   32296 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 17:28:59.105450   32296 main.go:141] libmachine: Making call to close driver server
I0717 17:28:59.105471   32296 main.go:141] libmachine: (functional-174661) Calling .Close
I0717 17:28:59.105757   32296 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:28:59.105776   32296 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:28:59.105796   32296 main.go:141] libmachine: Making call to close driver server
I0717 17:28:59.105804   32296 main.go:141] libmachine: (functional-174661) Calling .Close
I0717 17:28:59.106051   32296 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:28:59.106064   32296 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:28:59.106083   32296 main.go:141] libmachine: (functional-174661) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
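
Editor's note: for readers who want to slice the JSON listing above by hand, the same command can be piped through jq. This is a sketch only; jq is not part of the test run and is assumed to be installed on the host. The fields used (repoTags, size) are the ones visible in the output above.

    out/minikube-linux-amd64 -p functional-174661 image ls --format json \
      | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])  \(.size)"'
    # prints one "tag  size-in-bytes" line per tagged image, e.g.
    #   registry.k8s.io/kube-apiserver:v1.30.2  117609954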

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174661 image ls --format yaml --alsologtostderr:
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests:
- registry.k8s.io/kube-proxy@sha256:854b9a1bb27a6b3ee8e7345f459aaed19944febdaef0a3dfda783896ee8ed961
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "85953433"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
- registry.k8s.io/kube-scheduler@sha256:15e2a8d20a932559fe81b5a0b110e169d160edb92280d39a454f6ce3e358558b
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "63051080"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-174661
size: "4943877"
- id: ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f
repoDigests:
- docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "65908273"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0cb852fbc04062fd3331a27a83bf68d627ad09107fe8c846c6d666d4ee0c4816
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117609954"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
- docker.io/library/nginx@sha256:db5e49f40979ce521f05f0bc9f513d0abacce47904e229f3a95c2e6d9b47f244
repoTags:
- docker.io/library/nginx:latest
size: "191746190"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 498a72bcac42e72243f30f977c35e0066533af5ddc915a03b9b8f62e01d84056
repoDigests:
- localhost/minikube-local-cache-test@sha256:cffc37d201432f9be97c37604b6136794785a096e02dc7eb198d81dea5036c04
repoTags:
- localhost/minikube-local-cache-test:functional-174661
size: "3328"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
- registry.k8s.io/kube-controller-manager@sha256:78b1a11c01b8ab34320ae3e12f6d620e4ccba4b1ca070a1ade2336fe78d8e39b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "112194888"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-174661 image ls --format yaml --alsologtostderr:
I0717 17:28:58.138473   32216 out.go:291] Setting OutFile to fd 1 ...
I0717 17:28:58.138565   32216 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:28:58.138573   32216 out.go:304] Setting ErrFile to fd 2...
I0717 17:28:58.138577   32216 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:28:58.138746   32216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
I0717 17:28:58.139232   32216 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 17:28:58.139321   32216 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 17:28:58.139712   32216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 17:28:58.139739   32216 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:28:58.154748   32216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33561
I0717 17:28:58.155190   32216 main.go:141] libmachine: () Calling .GetVersion
I0717 17:28:58.155735   32216 main.go:141] libmachine: Using API Version  1
I0717 17:28:58.155757   32216 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:28:58.156072   32216 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:28:58.156218   32216 main.go:141] libmachine: (functional-174661) Calling .GetState
I0717 17:28:58.158179   32216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 17:28:58.158214   32216 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:28:58.172254   32216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
I0717 17:28:58.172617   32216 main.go:141] libmachine: () Calling .GetVersion
I0717 17:28:58.173056   32216 main.go:141] libmachine: Using API Version  1
I0717 17:28:58.173075   32216 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:28:58.173380   32216 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:28:58.173552   32216 main.go:141] libmachine: (functional-174661) Calling .DriverName
I0717 17:28:58.173742   32216 ssh_runner.go:195] Run: systemctl --version
I0717 17:28:58.173771   32216 main.go:141] libmachine: (functional-174661) Calling .GetSSHHostname
I0717 17:28:58.176225   32216 main.go:141] libmachine: (functional-174661) DBG | domain functional-174661 has defined MAC address 52:54:00:58:8c:dd in network mk-functional-174661
I0717 17:28:58.176589   32216 main.go:141] libmachine: (functional-174661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8c:dd", ip: ""} in network mk-functional-174661: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:34 +0000 UTC Type:0 Mac:52:54:00:58:8c:dd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-174661 Clientid:01:52:54:00:58:8c:dd}
I0717 17:28:58.176612   32216 main.go:141] libmachine: (functional-174661) DBG | domain functional-174661 has defined IP address 192.168.39.77 and MAC address 52:54:00:58:8c:dd in network mk-functional-174661
I0717 17:28:58.176754   32216 main.go:141] libmachine: (functional-174661) Calling .GetSSHPort
I0717 17:28:58.176901   32216 main.go:141] libmachine: (functional-174661) Calling .GetSSHKeyPath
I0717 17:28:58.177053   32216 main.go:141] libmachine: (functional-174661) Calling .GetSSHUsername
I0717 17:28:58.177170   32216 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/functional-174661/id_rsa Username:docker}
I0717 17:28:58.255536   32216 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 17:28:58.288246   32216 main.go:141] libmachine: Making call to close driver server
I0717 17:28:58.288258   32216 main.go:141] libmachine: (functional-174661) Calling .Close
I0717 17:28:58.288523   32216 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:28:58.288540   32216 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:28:58.288554   32216 main.go:141] libmachine: (functional-174661) DBG | Closing plugin on server side
I0717 17:28:58.288557   32216 main.go:141] libmachine: Making call to close driver server
I0717 17:28:58.288616   32216 main.go:141] libmachine: (functional-174661) Calling .Close
I0717 17:28:58.288851   32216 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:28:58.288864   32216 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:28:58.288888   32216 main.go:141] libmachine: (functional-174661) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh pgrep buildkitd
2024/07/17 17:28:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174661 ssh pgrep buildkitd: exit status 1 (177.253792ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image build -t localhost/my-image:functional-174661 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 image build -t localhost/my-image:functional-174661 testdata/build --alsologtostderr: (2.9975599s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174661 image build -t localhost/my-image:functional-174661 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 86b31f3ea1d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-174661
--> 0ee13d0e35c
Successfully tagged localhost/my-image:functional-174661
0ee13d0e35c93b75b57445e7453d7ddc6796494990a00d59741d32df04d1f134
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-174661 image build -t localhost/my-image:functional-174661 testdata/build --alsologtostderr:
I0717 17:28:58.511932   32271 out.go:291] Setting OutFile to fd 1 ...
I0717 17:28:58.512255   32271 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:28:58.512343   32271 out.go:304] Setting ErrFile to fd 2...
I0717 17:28:58.512353   32271 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 17:28:58.512620   32271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
I0717 17:28:58.513189   32271 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 17:28:58.513709   32271 config.go:182] Loaded profile config "functional-174661": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
I0717 17:28:58.514031   32271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 17:28:58.514080   32271 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:28:58.528650   32271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32937
I0717 17:28:58.529065   32271 main.go:141] libmachine: () Calling .GetVersion
I0717 17:28:58.529716   32271 main.go:141] libmachine: Using API Version  1
I0717 17:28:58.529739   32271 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:28:58.530080   32271 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:28:58.530262   32271 main.go:141] libmachine: (functional-174661) Calling .GetState
I0717 17:28:58.531958   32271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0717 17:28:58.531988   32271 main.go:141] libmachine: Launching plugin server for driver kvm2
I0717 17:28:58.546078   32271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
I0717 17:28:58.546376   32271 main.go:141] libmachine: () Calling .GetVersion
I0717 17:28:58.546825   32271 main.go:141] libmachine: Using API Version  1
I0717 17:28:58.546842   32271 main.go:141] libmachine: () Calling .SetConfigRaw
I0717 17:28:58.547143   32271 main.go:141] libmachine: () Calling .GetMachineName
I0717 17:28:58.547327   32271 main.go:141] libmachine: (functional-174661) Calling .DriverName
I0717 17:28:58.547553   32271 ssh_runner.go:195] Run: systemctl --version
I0717 17:28:58.547574   32271 main.go:141] libmachine: (functional-174661) Calling .GetSSHHostname
I0717 17:28:58.550256   32271 main.go:141] libmachine: (functional-174661) DBG | domain functional-174661 has defined MAC address 52:54:00:58:8c:dd in network mk-functional-174661
I0717 17:28:58.550680   32271 main.go:141] libmachine: (functional-174661) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:8c:dd", ip: ""} in network mk-functional-174661: {Iface:virbr1 ExpiryTime:2024-07-17 18:25:34 +0000 UTC Type:0 Mac:52:54:00:58:8c:dd Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:functional-174661 Clientid:01:52:54:00:58:8c:dd}
I0717 17:28:58.550709   32271 main.go:141] libmachine: (functional-174661) DBG | domain functional-174661 has defined IP address 192.168.39.77 and MAC address 52:54:00:58:8c:dd in network mk-functional-174661
I0717 17:28:58.550896   32271 main.go:141] libmachine: (functional-174661) Calling .GetSSHPort
I0717 17:28:58.551037   32271 main.go:141] libmachine: (functional-174661) Calling .GetSSHKeyPath
I0717 17:28:58.551175   32271 main.go:141] libmachine: (functional-174661) Calling .GetSSHUsername
I0717 17:28:58.551266   32271 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/functional-174661/id_rsa Username:docker}
I0717 17:28:58.635483   32271 build_images.go:161] Building image from path: /tmp/build.1508628803.tar
I0717 17:28:58.635550   32271 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 17:28:58.650183   32271 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1508628803.tar
I0717 17:28:58.658447   32271 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1508628803.tar: stat -c "%s %y" /var/lib/minikube/build/build.1508628803.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1508628803.tar': No such file or directory
I0717 17:28:58.658476   32271 ssh_runner.go:362] scp /tmp/build.1508628803.tar --> /var/lib/minikube/build/build.1508628803.tar (3072 bytes)
I0717 17:28:58.700688   32271 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1508628803
I0717 17:28:58.717558   32271 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1508628803 -xf /var/lib/minikube/build/build.1508628803.tar
I0717 17:28:58.738307   32271 crio.go:315] Building image: /var/lib/minikube/build/build.1508628803
I0717 17:28:58.738371   32271 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-174661 /var/lib/minikube/build/build.1508628803 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0717 17:29:01.439934   32271 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-174661 /var/lib/minikube/build/build.1508628803 --cgroup-manager=cgroupfs: (2.70153663s)
I0717 17:29:01.439988   32271 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1508628803
I0717 17:29:01.453575   32271 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1508628803.tar
I0717 17:29:01.462924   32271 build_images.go:217] Built localhost/my-image:functional-174661 from /tmp/build.1508628803.tar
I0717 17:29:01.462957   32271 build_images.go:133] succeeded building to: functional-174661
I0717 17:29:01.462963   32271 build_images.go:134] failed building to: 
I0717 17:29:01.462991   32271 main.go:141] libmachine: Making call to close driver server
I0717 17:29:01.463006   32271 main.go:141] libmachine: (functional-174661) Calling .Close
I0717 17:29:01.463246   32271 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:29:01.463260   32271 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:29:01.463268   32271 main.go:141] libmachine: Making call to close driver server
I0717 17:29:01.463275   32271 main.go:141] libmachine: (functional-174661) Calling .Close
I0717 17:29:01.463473   32271 main.go:141] libmachine: Successfully made call to close driver server
I0717 17:29:01.463485   32271 main.go:141] libmachine: Making call to close connection to plugin binary
I0717 17:29:01.463502   32271 main.go:141] libmachine: (functional-174661) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)
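
Editor's note: the three STEP lines in the build output above imply a build context equivalent to the following. The actual files under testdata/build are not shown in this log, so treat this as a hedged reconstruction (the content.txt payload here is a placeholder), not the repository's exact content.

    mkdir -p /tmp/image-build-demo && cd /tmp/image-build-demo
    echo "sample payload" > content.txt        # placeholder file; name taken from STEP 3/3
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    out/minikube-linux-amd64 -p functional-174661 image build -t localhost/my-image:functional-174661 .
    out/minikube-linux-amd64 -p functional-174661 image ls | grep my-image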

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.754048817s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-174661
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (50.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-174661 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-174661 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-ccsl9" [2ce9a9d4-507a-41c3-b52c-ddd28068559c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-ccsl9" [2ce9a9d4-507a-41c3-b52c-ddd28068559c] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 50.004364183s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (50.19s)
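
Editor's note: the same deploy-and-wait flow can be reproduced outside the test harness. A minimal sketch, assuming the functional-174661 context is still active; the create and expose commands are taken verbatim from the log above, and kubectl wait stands in for the test's polling loop.

    kubectl --context functional-174661 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-174661 expose deployment hello-node --type=NodePort --port=8080
    # the test allows up to 10m for the pod to become healthy
    kubectl --context functional-174661 wait --for=condition=ready pod -l app=hello-node --timeout=10m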

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image load --daemon docker.io/kicbase/echo-server:functional-174661 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 image load --daemon docker.io/kicbase/echo-server:functional-174661 --alsologtostderr: (1.009612649s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image load --daemon docker.io/kicbase/echo-server:functional-174661 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
E0717 17:28:25.635457   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-174661
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image load --daemon docker.io/kicbase/echo-server:functional-174661 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image save docker.io/kicbase/echo-server:functional-174661 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 image save docker.io/kicbase/echo-server:functional-174661 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.062398831s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image rm docker.io/kicbase/echo-server:functional-174661 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.390095474s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.33s)
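
Editor's note: taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile amount to a tar round trip. A condensed sketch of the same commands, using an arbitrary path under /tmp instead of the workspace path the test uses:

    out/minikube-linux-amd64 -p functional-174661 image save docker.io/kicbase/echo-server:functional-174661 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-174661 image rm docker.io/kicbase/echo-server:functional-174661
    out/minikube-linux-amd64 -p functional-174661 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-174661 image ls | grep echo-server   # the tag should be back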

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-174661
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 image save --daemon docker.io/kicbase/echo-server:functional-174661 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 image save --daemon docker.io/kicbase/echo-server:functional-174661 --alsologtostderr: (1.537217386s)
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-174661
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.57s)
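
Editor's note: the *Daemon variants above move the image through the host's Docker daemon instead of a tar file. The individual commands from those tests, strung together as one hedged walkthrough:

    docker pull docker.io/kicbase/echo-server:1.0
    docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-174661
    out/minikube-linux-amd64 -p functional-174661 image load --daemon docker.io/kicbase/echo-server:functional-174661
    docker rmi docker.io/kicbase/echo-server:functional-174661                   # drop the host copy
    out/minikube-linux-amd64 -p functional-174661 image save --daemon docker.io/kicbase/echo-server:functional-174661
    docker image inspect docker.io/kicbase/echo-server:functional-174661         # host copy restored from the cluster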

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "218.687859ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "48.199844ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "229.085293ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "41.467463ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)
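
Editor's note: the JSON form of profile list is easy to post-process. A sketch, assuming jq is installed and that the output keeps its usual shape with top-level valid/invalid arrays of profile objects:

    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
    # expected to print the active profile names, e.g. functional-174661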

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174661 /tmp/TestFunctionalparallelMountCmdany-port4096003533/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721237325528923422" to /tmp/TestFunctionalparallelMountCmdany-port4096003533/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721237325528923422" to /tmp/TestFunctionalparallelMountCmdany-port4096003533/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721237325528923422" to /tmp/TestFunctionalparallelMountCmdany-port4096003533/001/test-1721237325528923422
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (181.953028ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 17:28 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 17:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 17:28 test-1721237325528923422
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh cat /mount-9p/test-1721237325528923422
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-174661 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1eb203f8-58cb-4d00-9a0d-711e0bef2ad6] Pending
helpers_test.go:344: "busybox-mount" [1eb203f8-58cb-4d00-9a0d-711e0bef2ad6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1eb203f8-58cb-4d00-9a0d-711e0bef2ad6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1eb203f8-58cb-4d00-9a0d-711e0bef2ad6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004586767s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-174661 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174661 /tmp/TestFunctionalparallelMountCmdany-port4096003533/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.21s)
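
Editor's note: outside the harness, the same 9p mount can be driven by hand. A sketch with an arbitrary host directory, run from an interactive shell (minikube mount stays in the foreground, so it is backgrounded here):

    mkdir -p /tmp/mount-demo && date > /tmp/mount-demo/created-by-hand
    out/minikube-linux-amd64 mount -p functional-174661 /tmp/mount-demo:/mount-9p &
    out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-174661 ssh -- ls -la /mount-9p
    kill %1    # terminate the mount helper when finished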

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174661 /tmp/TestFunctionalparallelMountCmdspecific-port2683362357/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.568039ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174661 /tmp/TestFunctionalparallelMountCmdspecific-port2683362357/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174661 ssh "sudo umount -f /mount-9p": exit status 1 (271.133405ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-174661 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174661 /tmp/TestFunctionalparallelMountCmdspecific-port2683362357/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062546235/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062546235/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062546235/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T" /mount1: exit status 1 (310.51146ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-174661 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062546235/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062546235/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174661 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3062546235/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)
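
Editor's note: the VerifyCleanup run above leans on the --kill flag to tear down all three mounts at once. The manual equivalent, as a sketch:

    out/minikube-linux-amd64 mount -p functional-174661 --kill=true   # kills all mount helpers for the profile
    out/minikube-linux-amd64 -p functional-174661 ssh "findmnt -T /mount1 | grep 9p || echo mount1 is gone"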

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 service list: (1.210175829s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.21s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-174661 service list -o json: (1.213814912s)
functional_test.go:1490: Took "1.213919032s" to run "out/minikube-linux-amd64 -p functional-174661 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.21s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.77:31558
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-174661 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.77:31558
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.26s)
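
Editor's note: a quick way to exercise the endpoint these ServiceCmd tests discover, once the hello-node service exists. A sketch only, since the NodePort (31558 in this run) changes between runs:

    URL=$(out/minikube-linux-amd64 -p functional-174661 service hello-node --url)
    curl -s "$URL" | head -n 5   # echoserver replies with details of the request it received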

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-174661
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-174661
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-174661
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (268.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174628 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 17:30:41.791497   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:31:09.475794   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 17:33:21.395596   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:21.400973   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:21.411265   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:21.431640   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:21.471918   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:21.552244   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:21.712751   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:22.033624   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:22.674747   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:23.955512   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:26.516028   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:31.637110   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:33:41.877505   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-174628 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m27.566349974s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (268.20s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-174628 -- rollout status deployment/busybox: (4.127357419s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-5mnv5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-8zv26 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-ftgzz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-5mnv5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-8zv26 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-ftgzz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-5mnv5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-8zv26 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-ftgzz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.13s)
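
The DeployApp step above boils down to: apply the busybox manifest, wait for the rollout, then confirm that every busybox pod can resolve kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local. A minimal Go sketch of that final loop, assuming kubectl is on PATH and reusing the pod names and context shown in the log (the real test discovers the pod names at runtime via jsonpath):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Pod names and context are copied from the log above; in the real test the
	// pods are listed first with `kubectl get pods -o jsonpath=...`.
	pods := []string{"busybox-fc5497c4f-5mnv5", "busybox-fc5497c4f-8zv26", "busybox-fc5497c4f-ftgzz"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "ha-174628",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				log.Fatalf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}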

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-5mnv5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-5mnv5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-8zv26 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-8zv26 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-ftgzz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-174628 -- exec busybox-fc5497c4f-ftgzz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.14s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (51.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-174628 -v=7 --alsologtostderr
E0717 17:34:02.358460   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-174628 -v=7 --alsologtostderr: (50.683080212s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (51.48s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-174628 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0717 17:34:43.318907   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.51s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp testdata/cp-test.txt ha-174628:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3227756898/001/cp-test_ha-174628.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628:/home/docker/cp-test.txt ha-174628-m02:/home/docker/cp-test_ha-174628_ha-174628-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m02 "sudo cat /home/docker/cp-test_ha-174628_ha-174628-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628:/home/docker/cp-test.txt ha-174628-m03:/home/docker/cp-test_ha-174628_ha-174628-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m03 "sudo cat /home/docker/cp-test_ha-174628_ha-174628-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628:/home/docker/cp-test.txt ha-174628-m04:/home/docker/cp-test_ha-174628_ha-174628-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m04 "sudo cat /home/docker/cp-test_ha-174628_ha-174628-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp testdata/cp-test.txt ha-174628-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3227756898/001/cp-test_ha-174628-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m02:/home/docker/cp-test.txt ha-174628:/home/docker/cp-test_ha-174628-m02_ha-174628.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628 "sudo cat /home/docker/cp-test_ha-174628-m02_ha-174628.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m02:/home/docker/cp-test.txt ha-174628-m03:/home/docker/cp-test_ha-174628-m02_ha-174628-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m03 "sudo cat /home/docker/cp-test_ha-174628-m02_ha-174628-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m02:/home/docker/cp-test.txt ha-174628-m04:/home/docker/cp-test_ha-174628-m02_ha-174628-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m04 "sudo cat /home/docker/cp-test_ha-174628-m02_ha-174628-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp testdata/cp-test.txt ha-174628-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3227756898/001/cp-test_ha-174628-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt ha-174628:/home/docker/cp-test_ha-174628-m03_ha-174628.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628 "sudo cat /home/docker/cp-test_ha-174628-m03_ha-174628.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt ha-174628-m02:/home/docker/cp-test_ha-174628-m03_ha-174628-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m02 "sudo cat /home/docker/cp-test_ha-174628-m03_ha-174628-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m03:/home/docker/cp-test.txt ha-174628-m04:/home/docker/cp-test_ha-174628-m03_ha-174628-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m04 "sudo cat /home/docker/cp-test_ha-174628-m03_ha-174628-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp testdata/cp-test.txt ha-174628-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3227756898/001/cp-test_ha-174628-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt ha-174628:/home/docker/cp-test_ha-174628-m04_ha-174628.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628 "sudo cat /home/docker/cp-test_ha-174628-m04_ha-174628.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt ha-174628-m02:/home/docker/cp-test_ha-174628-m04_ha-174628-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m02 "sudo cat /home/docker/cp-test_ha-174628-m04_ha-174628-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 cp ha-174628-m04:/home/docker/cp-test.txt ha-174628-m03:/home/docker/cp-test_ha-174628-m04_ha-174628-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 ssh -n ha-174628-m03 "sudo cat /home/docker/cp-test_ha-174628-m04_ha-174628-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.36s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.471868488s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-174628 node delete m03 -v=7 --alsologtostderr: (16.457126382s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.15s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (346.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-174628 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 17:48:21.396210   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:49:44.441893   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:50:41.791349   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-174628 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m45.62304988s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (346.36s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-174628 --control-plane -v=7 --alsologtostderr
E0717 17:53:21.395201   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-174628 --control-plane -v=7 --alsologtostderr: (1m14.894734056s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-174628 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.70s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.50s)

                                                
                                    
TestJSONOutput/start/Command (53.67s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-738163 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-738163 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (53.673937541s)
--- PASS: TestJSONOutput/start/Command (53.67s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-738163 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-738163 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.33s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-738163 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-738163 --output=json --user=testUser: (7.329270198s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-098757 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-098757 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (55.790595ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"24049fc6-bc33-42f6-856d-3ab8d1c7e200","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-098757] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"131c445d-d489-4837-a76e-0d3438cf63a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19283"}}
	{"specversion":"1.0","id":"e4db097b-bb24-4d0f-9507-8e01aba63beb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e6348063-f50f-433e-b82f-c846a54c5812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig"}}
	{"specversion":"1.0","id":"093c6f43-aafd-4ade-8e4d-49f2c25f96b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube"}}
	{"specversion":"1.0","id":"7afcbfc8-970a-42ba-b8f2-a631350bce7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f1c7ff10-2a81-42a4-be9e-a835ad18870a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a4537c32-65bc-4c01-bf6d-6e0f670b3979","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-098757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-098757
--- PASS: TestErrorJSONOutput (0.18s)
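
Each line in the stdout block above is a CloudEvents-style JSON object (specversion, id, source, type, datacontenttype, data), which is what minikube emits on stdout under --output=json; the final event carries the exit code and the DRV_UNSUPPORTED_OS error name. A minimal Go sketch that decodes such lines, assuming only the fields visible in the output above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event mirrors only the fields visible in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read JSON lines from stdin, e.g. piped from `minikube start --output=json ...`.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			log.Printf("skipping non-JSON line: %v", err)
			continue
		}
		fmt.Printf("%-40s %s\n", e.Type, e.Data["message"])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}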

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (78.93s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-524419 --driver=kvm2  --container-runtime=crio
E0717 17:55:41.790985   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-524419 --driver=kvm2  --container-runtime=crio: (38.625772473s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-526652 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-526652 --driver=kvm2  --container-runtime=crio: (37.528569806s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-524419
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-526652
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-526652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-526652
helpers_test.go:175: Cleaning up "first-524419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-524419
--- PASS: TestMinikubeProfile (78.93s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.75s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-376194 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-376194 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.752516466s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.75s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-376194 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-376194 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.49s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-391863 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-391863 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.490332804s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.49s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-391863 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-391863 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-376194 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-391863 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-391863 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-391863
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-391863: (1.263899778s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.89s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-391863
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-391863: (18.885494547s)
--- PASS: TestMountStart/serial/RestartStopped (19.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-391863 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-391863 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (115.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-866205 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 17:58:21.395778   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 17:58:44.836610   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-866205 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.767845332s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.16s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-866205 -- rollout status deployment/busybox: (3.82362258s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- exec busybox-fc5497c4f-d5rwl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- exec busybox-fc5497c4f-pkq4s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- exec busybox-fc5497c4f-d5rwl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- exec busybox-fc5497c4f-pkq4s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- exec busybox-fc5497c4f-d5rwl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- exec busybox-fc5497c4f-pkq4s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.15s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- exec busybox-fc5497c4f-d5rwl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- exec busybox-fc5497c4f-d5rwl -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- exec busybox-fc5497c4f-pkq4s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-866205 -- exec busybox-fc5497c4f-pkq4s -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                    
TestMultiNode/serial/AddNode (47.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-866205 -v 3 --alsologtostderr
E0717 18:00:41.791367   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-866205 -v 3 --alsologtostderr: (47.193375644s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.74s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-866205 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp testdata/cp-test.txt multinode-866205:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp multinode-866205:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1415765283/001/cp-test_multinode-866205.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp multinode-866205:/home/docker/cp-test.txt multinode-866205-m02:/home/docker/cp-test_multinode-866205_multinode-866205-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m02 "sudo cat /home/docker/cp-test_multinode-866205_multinode-866205-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp multinode-866205:/home/docker/cp-test.txt multinode-866205-m03:/home/docker/cp-test_multinode-866205_multinode-866205-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m03 "sudo cat /home/docker/cp-test_multinode-866205_multinode-866205-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp testdata/cp-test.txt multinode-866205-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp multinode-866205-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1415765283/001/cp-test_multinode-866205-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp multinode-866205-m02:/home/docker/cp-test.txt multinode-866205:/home/docker/cp-test_multinode-866205-m02_multinode-866205.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205 "sudo cat /home/docker/cp-test_multinode-866205-m02_multinode-866205.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp multinode-866205-m02:/home/docker/cp-test.txt multinode-866205-m03:/home/docker/cp-test_multinode-866205-m02_multinode-866205-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m03 "sudo cat /home/docker/cp-test_multinode-866205-m02_multinode-866205-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp testdata/cp-test.txt multinode-866205-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp multinode-866205-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1415765283/001/cp-test_multinode-866205-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp multinode-866205-m03:/home/docker/cp-test.txt multinode-866205:/home/docker/cp-test_multinode-866205-m03_multinode-866205.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205 "sudo cat /home/docker/cp-test_multinode-866205-m03_multinode-866205.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 cp multinode-866205-m03:/home/docker/cp-test.txt multinode-866205-m02:/home/docker/cp-test_multinode-866205-m03_multinode-866205-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 ssh -n multinode-866205-m02 "sudo cat /home/docker/cp-test_multinode-866205-m03_multinode-866205-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.86s)

                                                
                                    
TestMultiNode/serial/StopNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-866205 node stop m03: (1.366432319s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-866205 status: exit status 7 (400.313787ms)

                                                
                                                
-- stdout --
	multinode-866205
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-866205-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-866205-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-866205 status --alsologtostderr: exit status 7 (405.159013ms)

                                                
                                                
-- stdout --
	multinode-866205
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-866205-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-866205-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:00:52.186739   49950 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:00:52.186859   49950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:00:52.186868   49950 out.go:304] Setting ErrFile to fd 2...
	I0717 18:00:52.186874   49950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:00:52.187042   49950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:00:52.187195   49950 out.go:298] Setting JSON to false
	I0717 18:00:52.187223   49950 mustload.go:65] Loading cluster: multinode-866205
	I0717 18:00:52.187268   49950 notify.go:220] Checking for updates...
	I0717 18:00:52.187732   49950 config.go:182] Loaded profile config "multinode-866205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:00:52.187753   49950 status.go:255] checking status of multinode-866205 ...
	I0717 18:00:52.188159   49950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:00:52.188203   49950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:00:52.207941   49950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0717 18:00:52.208440   49950 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:00:52.209006   49950 main.go:141] libmachine: Using API Version  1
	I0717 18:00:52.209034   49950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:00:52.209412   49950 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:00:52.209588   49950 main.go:141] libmachine: (multinode-866205) Calling .GetState
	I0717 18:00:52.211183   49950 status.go:330] multinode-866205 host status = "Running" (err=<nil>)
	I0717 18:00:52.211199   49950 host.go:66] Checking if "multinode-866205" exists ...
	I0717 18:00:52.211486   49950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:00:52.211517   49950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:00:52.226160   49950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33701
	I0717 18:00:52.226496   49950 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:00:52.226922   49950 main.go:141] libmachine: Using API Version  1
	I0717 18:00:52.226946   49950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:00:52.227241   49950 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:00:52.227443   49950 main.go:141] libmachine: (multinode-866205) Calling .GetIP
	I0717 18:00:52.230096   49950 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:00:52.230502   49950 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:00:52.230529   49950 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:00:52.230660   49950 host.go:66] Checking if "multinode-866205" exists ...
	I0717 18:00:52.231042   49950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:00:52.231086   49950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:00:52.245707   49950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I0717 18:00:52.246090   49950 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:00:52.246656   49950 main.go:141] libmachine: Using API Version  1
	I0717 18:00:52.246677   49950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:00:52.247069   49950 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:00:52.247260   49950 main.go:141] libmachine: (multinode-866205) Calling .DriverName
	I0717 18:00:52.247481   49950 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:00:52.247508   49950 main.go:141] libmachine: (multinode-866205) Calling .GetSSHHostname
	I0717 18:00:52.250208   49950 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:00:52.250629   49950 main.go:141] libmachine: (multinode-866205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:5e:cb", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:58:07 +0000 UTC Type:0 Mac:52:54:00:27:5e:cb Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-866205 Clientid:01:52:54:00:27:5e:cb}
	I0717 18:00:52.250653   49950 main.go:141] libmachine: (multinode-866205) DBG | domain multinode-866205 has defined IP address 192.168.39.16 and MAC address 52:54:00:27:5e:cb in network mk-multinode-866205
	I0717 18:00:52.250754   49950 main.go:141] libmachine: (multinode-866205) Calling .GetSSHPort
	I0717 18:00:52.250929   49950 main.go:141] libmachine: (multinode-866205) Calling .GetSSHKeyPath
	I0717 18:00:52.251081   49950 main.go:141] libmachine: (multinode-866205) Calling .GetSSHUsername
	I0717 18:00:52.251229   49950 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/multinode-866205/id_rsa Username:docker}
	I0717 18:00:52.328763   49950 ssh_runner.go:195] Run: systemctl --version
	I0717 18:00:52.334809   49950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:00:52.349289   49950 kubeconfig.go:125] found "multinode-866205" server: "https://192.168.39.16:8443"
	I0717 18:00:52.349314   49950 api_server.go:166] Checking apiserver status ...
	I0717 18:00:52.349343   49950 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:00:52.362628   49950 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup
	W0717 18:00:52.371410   49950 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 18:00:52.371452   49950 ssh_runner.go:195] Run: ls
	I0717 18:00:52.375607   49950 api_server.go:253] Checking apiserver healthz at https://192.168.39.16:8443/healthz ...
	I0717 18:00:52.379634   49950 api_server.go:279] https://192.168.39.16:8443/healthz returned 200:
	ok
	I0717 18:00:52.379654   49950 status.go:422] multinode-866205 apiserver status = Running (err=<nil>)
	I0717 18:00:52.379662   49950 status.go:257] multinode-866205 status: &{Name:multinode-866205 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:00:52.379679   49950 status.go:255] checking status of multinode-866205-m02 ...
	I0717 18:00:52.379977   49950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:00:52.380017   49950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:00:52.394799   49950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40481
	I0717 18:00:52.395255   49950 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:00:52.395718   49950 main.go:141] libmachine: Using API Version  1
	I0717 18:00:52.395733   49950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:00:52.396024   49950 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:00:52.396227   49950 main.go:141] libmachine: (multinode-866205-m02) Calling .GetState
	I0717 18:00:52.397652   49950 status.go:330] multinode-866205-m02 host status = "Running" (err=<nil>)
	I0717 18:00:52.397665   49950 host.go:66] Checking if "multinode-866205-m02" exists ...
	I0717 18:00:52.397954   49950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:00:52.397997   49950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:00:52.412208   49950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I0717 18:00:52.412543   49950 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:00:52.413049   49950 main.go:141] libmachine: Using API Version  1
	I0717 18:00:52.413068   49950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:00:52.413344   49950 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:00:52.413516   49950 main.go:141] libmachine: (multinode-866205-m02) Calling .GetIP
	I0717 18:00:52.415961   49950 main.go:141] libmachine: (multinode-866205-m02) DBG | domain multinode-866205-m02 has defined MAC address 52:54:00:c3:0b:35 in network mk-multinode-866205
	I0717 18:00:52.416376   49950 main.go:141] libmachine: (multinode-866205-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:0b:35", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:59:15 +0000 UTC Type:0 Mac:52:54:00:c3:0b:35 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:multinode-866205-m02 Clientid:01:52:54:00:c3:0b:35}
	I0717 18:00:52.416416   49950 main.go:141] libmachine: (multinode-866205-m02) DBG | domain multinode-866205-m02 has defined IP address 192.168.39.113 and MAC address 52:54:00:c3:0b:35 in network mk-multinode-866205
	I0717 18:00:52.416518   49950 host.go:66] Checking if "multinode-866205-m02" exists ...
	I0717 18:00:52.416847   49950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:00:52.416883   49950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:00:52.431191   49950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37381
	I0717 18:00:52.431615   49950 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:00:52.432101   49950 main.go:141] libmachine: Using API Version  1
	I0717 18:00:52.432122   49950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:00:52.432417   49950 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:00:52.432594   49950 main.go:141] libmachine: (multinode-866205-m02) Calling .DriverName
	I0717 18:00:52.432828   49950 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:00:52.432850   49950 main.go:141] libmachine: (multinode-866205-m02) Calling .GetSSHHostname
	I0717 18:00:52.435317   49950 main.go:141] libmachine: (multinode-866205-m02) DBG | domain multinode-866205-m02 has defined MAC address 52:54:00:c3:0b:35 in network mk-multinode-866205
	I0717 18:00:52.435820   49950 main.go:141] libmachine: (multinode-866205-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c3:0b:35", ip: ""} in network mk-multinode-866205: {Iface:virbr1 ExpiryTime:2024-07-17 18:59:15 +0000 UTC Type:0 Mac:52:54:00:c3:0b:35 Iaid: IPaddr:192.168.39.113 Prefix:24 Hostname:multinode-866205-m02 Clientid:01:52:54:00:c3:0b:35}
	I0717 18:00:52.435846   49950 main.go:141] libmachine: (multinode-866205-m02) DBG | domain multinode-866205-m02 has defined IP address 192.168.39.113 and MAC address 52:54:00:c3:0b:35 in network mk-multinode-866205
	I0717 18:00:52.436003   49950 main.go:141] libmachine: (multinode-866205-m02) Calling .GetSSHPort
	I0717 18:00:52.436162   49950 main.go:141] libmachine: (multinode-866205-m02) Calling .GetSSHKeyPath
	I0717 18:00:52.436287   49950 main.go:141] libmachine: (multinode-866205-m02) Calling .GetSSHUsername
	I0717 18:00:52.436412   49950 sshutil.go:53] new ssh client: &{IP:192.168.39.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19283-14386/.minikube/machines/multinode-866205-m02/id_rsa Username:docker}
	I0717 18:00:52.519853   49950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:00:52.533010   49950 status.go:257] multinode-866205-m02 status: &{Name:multinode-866205-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 18:00:52.533036   49950 status.go:255] checking status of multinode-866205-m03 ...
	I0717 18:00:52.533372   49950 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0717 18:00:52.533414   49950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0717 18:00:52.549095   49950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
	I0717 18:00:52.549537   49950 main.go:141] libmachine: () Calling .GetVersion
	I0717 18:00:52.549998   49950 main.go:141] libmachine: Using API Version  1
	I0717 18:00:52.550034   49950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0717 18:00:52.550358   49950 main.go:141] libmachine: () Calling .GetMachineName
	I0717 18:00:52.550568   49950 main.go:141] libmachine: (multinode-866205-m03) Calling .GetState
	I0717 18:00:52.551966   49950 status.go:330] multinode-866205-m03 host status = "Stopped" (err=<nil>)
	I0717 18:00:52.551979   49950 status.go:343] host is not running, skipping remaining checks
	I0717 18:00:52.551984   49950 status.go:257] multinode-866205-m03 status: &{Name:multinode-866205-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
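
The stderr trace above probes the apiserver at https://192.168.39.16:8443/healthz and maps a 200 response to "apiserver status = Running". A minimal Go sketch of that probe follows; it assumes TLS verification is simply skipped for brevity, whereas the real status check authenticates with the cluster's client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// Skip certificate verification only for this illustration.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.16:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // would be reported as Stopped
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // 200 => apiserver Running
}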

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-866205 node start m03 -v=7 --alsologtostderr: (37.842915548s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.46s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-866205 node delete m03: (1.725841745s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.22s)
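
The readiness check above passes a Go text/template to kubectl to print each remaining node's Ready condition. A small self-contained sketch of how that exact template walks the node list; the one-node JSON literal is only a stand-in for `kubectl get nodes -o json` output.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Stand-in for the API server's node list.
	const nodeList = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	var data map[string]interface{}
	if err := json.Unmarshal([]byte(nodeList), &data); err != nil {
		panic(err)
	}

	// The template string is the one the test passes to kubectl.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	_ = tmpl.Execute(os.Stdout, data) // prints " True" for each Ready node
}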

                                                
                                    
TestMultiNode/serial/RestartMultiNode (180.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-866205 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0717 18:10:41.791834   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-866205 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m59.617370198s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-866205 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (180.12s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-866205
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-866205-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-866205-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (58.260069ms)

                                                
                                                
-- stdout --
	* [multinode-866205-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-866205-m02' is duplicated with machine name 'multinode-866205-m02' in profile 'multinode-866205'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-866205-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-866205-m03 --driver=kvm2  --container-runtime=crio: (39.47171587s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-866205
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-866205: exit status 80 (200.202398ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-866205 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-866205-m03 already exists in multinode-866205-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-866205-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.56s)
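
The exit status 14 (MK_USAGE) above comes from the rule that a new profile name may not collide with a machine name already owned by an existing multi-node profile. An illustrative sketch of that uniqueness check (not minikube's actual implementation; the data below mirrors the profile and machine names from the log):

package main

import "fmt"

// conflicts reports which existing profile already owns a machine named newProfile.
func conflicts(newProfile string, existing map[string][]string) (string, bool) {
	for profile, machines := range existing {
		for _, m := range machines {
			if m == newProfile {
				return profile, true
			}
		}
	}
	return "", false
}

func main() {
	existing := map[string][]string{
		"multinode-866205": {"multinode-866205", "multinode-866205-m02", "multinode-866205-m03"},
	}
	if p, ok := conflicts("multinode-866205-m02", existing); ok {
		// In the CLI this is reported as MK_USAGE and the process exits with status 14.
		fmt.Printf("Profile name %q is duplicated with a machine name in profile %q\n",
			"multinode-866205-m02", p)
	}
}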

                                                
                                    
TestScheduledStopUnix (113.01s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-691701 --memory=2048 --driver=kvm2  --container-runtime=crio
E0717 18:18:21.395545   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-691701 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.494304491s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-691701 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-691701 -n scheduled-stop-691701
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-691701 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-691701 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-691701 -n scheduled-stop-691701
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-691701
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-691701 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-691701
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-691701: exit status 7 (63.542475ms)

                                                
                                                
-- stdout --
	scheduled-stop-691701
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-691701 -n scheduled-stop-691701
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-691701 -n scheduled-stop-691701: exit status 7 (60.62153ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-691701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-691701
--- PASS: TestScheduledStopUnix (113.01s)
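
The test above schedules a stop, cancels it, reschedules a shorter one, and then sees the host report "Stopped". An illustrative sketch of that schedule/cancel/reschedule flow using an in-process timer; minikube itself hands the schedule to a separate daemonized process, so this only shows the control flow, not the real mechanism.

package main

import (
	"fmt"
	"time"
)

func main() {
	stop := func() { fmt.Println("stopping profile scheduled-stop-691701") }

	t := time.AfterFunc(5*time.Minute, stop) // stop --schedule 5m
	t.Stop()                                 // stop --cancel-scheduled
	t = time.AfterFunc(15*time.Second, stop) // stop --schedule 15s
	time.Sleep(16 * time.Second)             // the stop fires; status then reports "Stopped"
}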

                                                
                                    
TestRunningBinaryUpgrade (229.37s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3765421528 start -p running-upgrade-475983 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3765421528 start -p running-upgrade-475983 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m13.652679996s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-475983 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-475983 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.455395082s)
helpers_test.go:175: Cleaning up "running-upgrade-475983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-475983
--- PASS: TestRunningBinaryUpgrade (229.37s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-456922 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-456922 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (70.279515ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-456922] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
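
The MK_USAGE failure above enforces that --no-kubernetes and --kubernetes-version are mutually exclusive. An illustrative flag-validation sketch of that rule (not minikube's code; only the message and the exit status 14 are taken from the log):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // reported by the CLI as MK_USAGE
	}
}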

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (70.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-456922 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-456922 --driver=kvm2  --container-runtime=crio: (1m9.912933588s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-456922 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (70.15s)

                                                
                                    
TestNetworkPlugins/group/false (2.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-235476 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-235476 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (95.715557ms)

                                                
                                                
-- stdout --
	* [false-235476] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19283
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 18:19:36.574018   58100 out.go:291] Setting OutFile to fd 1 ...
	I0717 18:19:36.574254   58100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:19:36.574263   58100 out.go:304] Setting ErrFile to fd 2...
	I0717 18:19:36.574267   58100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 18:19:36.574466   58100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-14386/.minikube/bin
	I0717 18:19:36.575010   58100 out.go:298] Setting JSON to false
	I0717 18:19:36.575890   58100 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7320,"bootTime":1721233057,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:19:36.575941   58100 start.go:139] virtualization: kvm guest
	I0717 18:19:36.578275   58100 out.go:177] * [false-235476] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:19:36.579891   58100 notify.go:220] Checking for updates...
	I0717 18:19:36.579938   58100 out.go:177]   - MINIKUBE_LOCATION=19283
	I0717 18:19:36.581811   58100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:19:36.583385   58100 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19283-14386/kubeconfig
	I0717 18:19:36.584778   58100 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
	I0717 18:19:36.586132   58100 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:19:36.587544   58100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:19:36.589622   58100 config.go:182] Loaded profile config "NoKubernetes-456922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:19:36.589788   58100 config.go:182] Loaded profile config "offline-crio-406802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.2
	I0717 18:19:36.589901   58100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 18:19:36.622144   58100 out.go:177] * Using the kvm2 driver based on user configuration
	I0717 18:19:36.623441   58100 start.go:297] selected driver: kvm2
	I0717 18:19:36.623450   58100 start.go:901] validating driver "kvm2" against <nil>
	I0717 18:19:36.623460   58100 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:19:36.625660   58100 out.go:177] 
	W0717 18:19:36.626973   58100 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 18:19:36.628268   58100 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-235476 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-235476" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-235476" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-235476

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-235476"

                                                
                                                
----------------------- debugLogs end: false-235476 [took: 2.638044109s] --------------------------------
helpers_test.go:175: Cleaning up "false-235476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-235476
--- PASS: TestNetworkPlugins/group/false (2.87s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (62.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-456922 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-456922 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m1.418254397s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-456922 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-456922 status -o json: exit status 2 (265.715697ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-456922","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-456922
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-456922: (1.045627231s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (62.73s)
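
The `status -o json` output above is a flat JSON object. A minimal sketch decoding it into a struct whose field names mirror that output; illustrative only, using the exact line captured in the log.

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the fields printed by `minikube status -o json` above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-456922","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var s Status
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", s.Name, s.Host, s.Kubelet, s.APIServer)
}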

                                                
                                    
TestNoKubernetes/serial/Start (39.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-456922 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-456922 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.251687993s)
--- PASS: TestNoKubernetes/serial/Start (39.25s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-456922 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-456922 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.279056ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
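
The verification above treats ssh "exit status 3" as proof that the kubelet unit is not running: `systemctl is-active` exits 0 for an active unit and non-zero otherwise (3 here, matching the inactive case). A short sketch of the same check run locally:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Println("kubelet is not active, exit code:", exitErr.ExitCode()) // 3 in the log above
	default:
		fmt.Println("could not run systemctl:", err)
	}
}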

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.159996331s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.71s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-456922
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-456922: (1.389959312s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (44.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-456922 --driver=kvm2  --container-runtime=crio
E0717 18:23:04.442828   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-456922 --driver=kvm2  --container-runtime=crio: (44.127928763s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-456922 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-456922 "sudo systemctl is-active --quiet service kubelet": exit status 1 (190.139253ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.24s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (110.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1668713284 start -p stopped-upgrade-628280 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0717 18:23:21.395251   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1668713284 start -p stopped-upgrade-628280 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m9.044119325s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1668713284 -p stopped-upgrade-628280 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1668713284 -p stopped-upgrade-628280 stop: (2.144569899s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-628280 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-628280 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.697193193s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (110.89s)
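The upgrade flow above reduces to: create and stop a cluster with an older release binary, then restart the same profile with the binary under test and collect its logs. A condensed sketch using the paths and profile name from this log:

    OLD=/tmp/minikube-v1.26.0.1668713284   # older release binary, path taken from the log
    NEW=out/minikube-linux-amd64           # binary under test
    PROFILE=stopped-upgrade-628280

    "$OLD" start -p "$PROFILE" --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    "$OLD" -p "$PROFILE" stop
    "$NEW" start -p "$PROFILE" --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    "$NEW" logs -p "$PROFILE"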

                                                
                                    
x
+
TestPause/serial/Start (132.71s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-371172 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-371172 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m12.712735569s)
--- PASS: TestPause/serial/Start (132.71s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-628280
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (58.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (58.137643538s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (73.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m13.103077737s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-235476 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-235476 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-78p4j" [ec4d571a-aeaa-477b-818d-2438cf5f14f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-78p4j" [ec4d571a-aeaa-477b-818d-2438cf5f14f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004339191s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.21s)
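The helper polls pods matching app=netcat until they are Running and Ready; an approximately equivalent check with stock kubectl (selector and timeout taken from the wait message above) would be:

    kubectl --context auto-235476 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-235476 wait pod -l app=netcat --for=condition=Ready --timeout=15m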

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-235476 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
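Taken together, the last three subtests probe the netcat deployment for in-cluster DNS, loopback reachability, and hairpin traffic (the pod reaching itself back through its own service, hence the name). The raw commands, grouped here for reference:

    CTX=auto-235476
    kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"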

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (84.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m24.905179602s)
--- PASS: TestNetworkPlugins/group/calico/Start (84.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-j8hlb" [f8779a5f-c03f-4b32-860b-bb7a0be1bca0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00581704s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-235476 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-235476 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-c2jfb" [96c67a79-1e5b-45f4-891a-a908c2db06ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-c2jfb" [96c67a79-1e5b-45f4-891a-a908c2db06ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004583359s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-235476 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (103.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m43.426690956s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (103.43s)
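Unlike the named plugins, this run points --cni at a manifest path instead of a built-in name, and minikube applies that file as the CNI configuration. Sketch of the two forms (profiles and flags taken from this log, other flags omitted for brevity):

    # built-in plugin selected by name
    out/minikube-linux-amd64 start -p calico-235476 --cni=calico --driver=kvm2 --container-runtime=crio
    # custom manifest selected by path (file shipped in the test's testdata directory)
    out/minikube-linux-amd64 start -p custom-flannel-235476 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio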

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-w5hkl" [29d8cf27-bba5-4d02-ad3b-d9df5719ad1b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004839163s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-235476 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-235476 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xhgmz" [252c6c0a-d25c-477c-9fa5-d817d540dd4a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xhgmz" [252c6c0a-d25c-477c-9fa5-d817d540dd4a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003647477s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-235476 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (95.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m35.538611566s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (86.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m26.69199266s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-235476 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-235476 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qwkhz" [4209b10c-825e-427e-bd2c-2a55d531081b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qwkhz" [4209b10c-825e-427e-bd2c-2a55d531081b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.003461514s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-235476 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (61.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-235476 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m1.951330438s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-235476 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-235476 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tv77r" [d6d19150-9874-4412-88e1-aa67726e7e62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tv77r" [d6d19150-9874-4412-88e1-aa67726e7e62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003882849s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-b5hth" [f666ce6e-f78e-4322-979b-88bd040df623] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005169773s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-235476 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-235476 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-235476 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2mdth" [f6adfb1f-1304-4fe1-8efa-752030e26439] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2mdth" [f6adfb1f-1304-4fe1-8efa-752030e26439] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004060975s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-235476 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (116.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-066175 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-066175 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m56.421529988s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (116.42s)
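With --preload=false the cached preload tarball is skipped and the component images are pulled individually, which helps explain why this first start runs longer than the others; --kubernetes-version pins the pre-release v1.31.0-beta.0. One way to confirm the resulting server version (the jq path assumes the standard kubectl version JSON layout):

    kubectl --context no-preload-066175 version --output=json | jq -r '.serverVersion.gitVersion'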

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-235476 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-235476 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ztbbx" [72673aa4-03f7-42c5-9138-da5dd2d6fbaa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ztbbx" [72673aa4-03f7-42c5-9138-da5dd2d6fbaa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004001782s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-235476 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-235476 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0717 18:31:07.656532   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
E0717 18:31:07.661822   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
E0717 19:00:58.171292   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 19:01:07.656812   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (63.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-527415 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 18:31:28.141610   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
E0717 18:31:48.622481   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
E0717 18:32:04.838083   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 18:32:09.805815   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:32:09.811123   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:32:09.821421   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:32:09.841769   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:32:09.882104   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:32:09.962503   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:32:10.122647   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:32:10.443708   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:32:11.084787   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:32:12.365257   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:32:14.926033   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:32:20.046626   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-527415 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m3.726591469s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.73s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-527415 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2b3eadca-3d6f-477b-a234-a0f7fff2a3d2] Pending
helpers_test.go:344: "busybox" [2b3eadca-3d6f-477b-a234-a0f7fff2a3d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0717 18:32:29.583289   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
E0717 18:32:30.287457   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2b3eadca-3d6f-477b-a234-a0f7fff2a3d2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003544022s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-527415 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)
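The DeployApp step creates the busybox pod from the test's testdata, waits for it to become Ready, and then reads the open-file limit inside the container as a basic runtime sanity check. Roughly equivalent by hand, using the pod name and timeout from the messages above:

    kubectl --context embed-certs-527415 create -f testdata/busybox.yaml
    kubectl --context embed-certs-527415 wait pod busybox --for=condition=Ready --timeout=8m
    kubectl --context embed-certs-527415 exec busybox -- /bin/sh -c "ulimit -n"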

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-527415 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-527415 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.93s)
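The addon is enabled with its image and registry deliberately overridden (fake.domain is not a real registry), and the deployment is then described, presumably to confirm the override took effect. The same two commands, for reference:

    out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-527415 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-527415 -n kube-system describe deploy/metrics-server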

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-066175 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cb23026c-c2b1-4c68-91ee-66e1b965646f] Pending
E0717 18:32:50.768061   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cb23026c-c2b1-4c68-91ee-66e1b965646f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cb23026c-c2b1-4c68-91ee-66e1b965646f] Running
E0717 18:32:59.763214   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:32:59.768437   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:32:59.778682   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:32:59.798990   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:32:59.839395   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:32:59.920499   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:33:00.080917   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:33:00.402037   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:33:01.042162   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.00498202s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-066175 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-022930 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-022930 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (1m35.220952545s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.22s)
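This variant moves the API server off minikube's usual 8443 via --apiserver-port=8444. A quick way to confirm which port the profile's context actually points at:

    kubectl --context default-k8s-diff-port-022930 cluster-info

The control plane URL printed there should end in :8444.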

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-066175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0717 18:33:02.322506   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-066175 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-022930 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [08a2f717-0bff-4460-9867-5f61919c8413] Pending
E0717 18:34:31.643389   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
helpers_test.go:344: "busybox" [08a2f717-0bff-4460-9867-5f61919c8413] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [08a2f717-0bff-4460-9867-5f61919c8413] Running
E0717 18:34:36.764207   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003563066s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-022930 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-022930 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-022930 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (672.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-527415 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 18:35:12.088938   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:12.094221   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:12.104485   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:12.124903   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:12.165208   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:12.245545   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:12.405919   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:12.726499   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:13.367198   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:14.648072   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:17.208567   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:19.279990   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:35:19.285253   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:35:19.295516   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:35:19.315853   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:35:19.356213   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:35:19.436574   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:35:19.597030   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:35:19.918048   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:35:20.559064   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-527415 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (11m11.923386033s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-527415 -n embed-certs-527415
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (672.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (599.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-066175 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 18:35:39.760725   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:35:41.791928   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 18:35:43.606867   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:35:48.446636   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:35:53.051274   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:35:58.172272   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:35:58.177547   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:35:58.187782   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:35:58.208028   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:35:58.248342   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:35:58.328658   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:35:58.489115   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:35:58.809747   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:35:59.449965   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:36:00.240878   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:36:00.730472   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:36:03.290945   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:36:07.655965   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
E0717 18:36:08.412089   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:36:18.652695   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:36:34.012343   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:36:35.345328   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
E0717 18:36:39.133041   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:36:41.201238   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-066175 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (9m58.960355258s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-066175 -n no-preload-066175
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (599.20s)
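Note on the E0717 cert_rotation lines above: client-go's certificate rotation inside the long-running test process (pid 21577) is still watching client certificates that belonged to the network-plugin profiles (auto-235476, flannel-235476, enable-default-cni-235476, ...) deleted earlier in the run, so each reload fails with "no such file or directory". The errors are noisy but the no-preload SecondStart above still passed. A minimal shell sketch for listing which referenced certificates are actually missing; the MINIKUBE_HOME value is copied from the log paths, while the kubeconfig location and the grep loop are assumptions for illustration, not harness tooling:

  # Illustrative only, not part of the test suite.
  MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-14386/.minikube
  KUBECONFIG="${KUBECONFIG:-$HOME/.kube/config}"   # assumed location of the active kubeconfig
  for crt in $(grep -o "${MINIKUBE_HOME}/profiles/[^[:space:]]*/client.crt" "$KUBECONFIG" | sort -u); do
    [ -f "$crt" ] || echo "missing: $crt"
  done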

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-019549 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-019549 --alsologtostderr -v=3: (2.275554602s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549: exit status 7 (61.474625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-019549 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
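The "status error: exit status 7 (may be ok)" line reflects that "minikube status" deliberately exits non-zero while a profile is stopped; the test treats that as acceptable and goes on to enable the dashboard addon. A hedged sketch of how a caller can separate an expected stopped state from a real failure (the exit-code handling below is an assumption for illustration, not the test's own code):

  # Illustrative only: non-zero exit is expected while the profile is stopped.
  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-019549 -n old-k8s-version-019549
  rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "status exited with $rc; output other than 'Stopped' would indicate a real problem"
  fi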

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (524.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-022930 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2
E0717 18:37:20.093859   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:37:37.490989   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:37:55.932935   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:37:59.762782   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:38:03.121825   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:38:21.395289   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 18:38:27.447917   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:38:42.014507   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:39:26.523919   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:39:44.443665   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 18:39:54.207731   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:40:12.088976   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:40:19.280067   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:40:39.773403   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:40:41.791220   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/addons-435911/client.crt: no such file or directory
E0717 18:40:46.962632   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
E0717 18:40:58.172215   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:41:07.656164   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/auto-235476/client.crt: no such file or directory
E0717 18:41:25.855184   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/enable-default-cni-235476/client.crt: no such file or directory
E0717 18:42:09.805488   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
E0717 18:42:59.762762   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/calico-235476/client.crt: no such file or directory
E0717 18:43:21.395188   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/functional-174661/client.crt: no such file or directory
E0717 18:44:26.523798   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/custom-flannel-235476/client.crt: no such file or directory
E0717 18:45:12.089148   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/bridge-235476/client.crt: no such file or directory
E0717 18:45:19.279306   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/flannel-235476/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-022930 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.2: (8m44.248476089s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-022930 -n default-k8s-diff-port-022930
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (524.50s)
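The only functional difference between this default-k8s-diff-port run and the other SecondStart runs is the non-default API server port (--apiserver-port=8444). An illustrative way to confirm the port from the generated kubeconfig entry; the jsonpath filter is an assumption for this sketch and is not something the test executes:

  kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-022930")].cluster.server}'
  # expected output of the form https://<node-ip>:8444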

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (43.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-875270 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-875270 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (43.50974281s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-875270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-875270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.014679477s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)
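The "cni mode requires additional setup before pods can schedule" warning explains why the newest-cni group skips app deployment later on: the cluster is started with --network-plugin=cni and kubeadm.pod-network-cidr=10.42.0.0/16 but no CNI plugin is deployed, so the node stays NotReady and ordinary pods will not schedule. Outside this test, the missing step would look roughly like the sketch below; the flannel manifest is only an example, and its default 10.244.0.0/16 pod CIDR would need to be changed to match 10.42.0.0/16:

  # Hypothetical follow-up, not executed by the test:
  kubectl --context newest-cni-875270 get nodes        # NotReady until a CNI is installed
  kubectl --context newest-cni-875270 apply -f \
    https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml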

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-875270 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-875270 --alsologtostderr -v=3: (10.468205841s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.47s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-875270 -n newest-cni-875270
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-875270 -n newest-cni-875270: exit status 7 (63.05017ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-875270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (33.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-875270 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0717 19:02:09.805403   21577 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19283-14386/.minikube/profiles/kindnet-235476/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-875270 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (32.940978064s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-875270 -n newest-cni-875270
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-875270 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-875270 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-875270 -n newest-cni-875270
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-875270 -n newest-cni-875270: exit status 2 (226.115998ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-875270 -n newest-cni-875270
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-875270 -n newest-cni-875270: exit status 2 (222.423129ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-875270 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-875270 -n newest-cni-875270
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-875270 -n newest-cni-875270
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.21s)
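The Pause test relies on "minikube status" exiting with status 2 while components are paused or stopped, which the harness again treats as acceptable. A condensed, illustrative version of the same sequence; the expected outputs in the comments are read off the log above and are assumptions, not checked by this sketch:

  out/minikube-linux-amd64 pause -p newest-cni-875270 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-875270 || true   # Paused, exit status 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-875270 || true     # Stopped, exit status 2
  out/minikube-linux-amd64 unpause -p newest-cni-875270 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-875270           # Running after unpause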

                                                
                                    

Test skip (40/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.2/cached-images 0
15 TestDownloadOnly/v1.30.2/binaries 0
16 TestDownloadOnly/v1.30.2/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
50 TestAddons/parallel/Volcano 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 2.8
272 TestNetworkPlugins/group/cilium 3.06
285 TestStartStop/group/disable-driver-mounts 0.14
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-235476 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-235476" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-235476" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-235476" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-235476

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-235476"

                                                
                                                
----------------------- debugLogs end: kubenet-235476 [took: 2.665626245s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-235476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-235476
--- SKIP: TestNetworkPlugins/group/kubenet (2.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-235476 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-235476

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-235476

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-235476

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-235476

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-235476

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-235476

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-235476

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-235476

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-235476

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-235476

>>> host: /etc/nsswitch.conf:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: /etc/hosts:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: /etc/resolv.conf:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-235476

>>> host: crictl pods:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: crictl containers:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> k8s: describe netcat deployment:
error: context "cilium-235476" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-235476" does not exist

>>> k8s: netcat logs:
error: context "cilium-235476" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-235476" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-235476" does not exist

>>> k8s: coredns logs:
error: context "cilium-235476" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-235476" does not exist

>>> k8s: api server logs:
error: context "cilium-235476" does not exist

>>> host: /etc/cni:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: ip a s:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: ip r s:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: iptables-save:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: iptables table nat:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-235476

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-235476

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-235476" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-235476" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-235476

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-235476

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-235476" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-235476" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-235476" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-235476" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-235476" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: kubelet daemon config:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> k8s: kubelet logs:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
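
[Editor's note] The empty kubeconfig above (clusters/contexts are null) is the root of every "context was not found" and "does not exist" message in this dump: the cilium-235476 profile was never started, so no kubeconfig context was ever written. Below is a minimal, illustrative Go sketch of how a log collector could pre-check for the context before running context-scoped kubectl probes. It is not minikube's actual implementation; the helper name and the early-exit behaviour are assumptions, and it only relies on kubectl being on PATH.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether the named context is present in the current
// kubeconfig, using `kubectl config get-contexts -o name`.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("cilium-235476")
	if err != nil {
		fmt.Println("kubectl not usable:", err)
		return
	}
	if !ok {
		// Matches what the dump shows: the context is absent, so the
		// context-scoped probes could be skipped up front.
		fmt.Println(`context "cilium-235476" does not exist; skipping kubectl probes`)
		return
	}
	fmt.Println("context found; kubectl probes would run here")
}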

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-235476

>>> host: docker daemon status:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: docker daemon config:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: docker system info:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: cri-docker daemon status:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: cri-docker daemon config:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: cri-dockerd version:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: containerd daemon status:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: containerd daemon config:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: containerd config dump:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: crio daemon status:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: crio daemon config:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: /etc/crio:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

>>> host: crio config:
* Profile "cilium-235476" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235476"

----------------------- debugLogs end: cilium-235476 [took: 2.9298164s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-235476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-235476
--- SKIP: TestNetworkPlugins/group/cilium (3.06s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-341716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-341716
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
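
[Editor's note] The skip above is driver-gated: the test only runs when the VirtualBox driver is selected, so under KVM it is skipped immediately after cleaning up the pre-created profile. A minimal sketch of that pattern in a Go test follows; the environment-variable name and test body are illustrative assumptions, not minikube's actual test helpers.

package example_test

import (
	"os"
	"testing"
)

func TestDisableDriverMounts(t *testing.T) {
	// Assumption: the selected driver name is exposed via an environment variable.
	driver := os.Getenv("MINIKUBE_TEST_DRIVER")
	if driver != "virtualbox" {
		t.Skipf("skipping: only runs on virtualbox (current driver %q)", driver)
	}
	// The real test would start a cluster with driver mounts disabled here.
}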

                                                
                                    